Oct  9 05:00:23 np0005478304 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  9 05:00:23 np0005478304 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 05:00:23 np0005478304 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:23 np0005478304 kernel: BIOS-provided physical RAM map:
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 05:00:23 np0005478304 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 05:00:23 np0005478304 kernel: NX (Execute Disable) protection: active
Oct  9 05:00:23 np0005478304 kernel: APIC: Static calls initialized
Oct  9 05:00:23 np0005478304 kernel: SMBIOS 2.8 present.
Oct  9 05:00:23 np0005478304 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 05:00:23 np0005478304 kernel: Hypervisor detected: KVM
Oct  9 05:00:23 np0005478304 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 05:00:23 np0005478304 kernel: kvm-clock: using sched offset of 3182421395 cycles
Oct  9 05:00:23 np0005478304 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 05:00:23 np0005478304 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 05:00:23 np0005478304 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 05:00:23 np0005478304 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 05:00:23 np0005478304 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 05:00:23 np0005478304 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 05:00:23 np0005478304 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 05:00:23 np0005478304 kernel: Using GB pages for direct mapping
Oct  9 05:00:23 np0005478304 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Early table checksum verification disabled
Oct  9 05:00:23 np0005478304 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 05:00:23 np0005478304 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 05:00:23 np0005478304 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 05:00:23 np0005478304 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 05:00:23 np0005478304 kernel: No NUMA configuration found
Oct  9 05:00:23 np0005478304 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 05:00:23 np0005478304 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct  9 05:00:23 np0005478304 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct  9 05:00:23 np0005478304 kernel: Zone ranges:
Oct  9 05:00:23 np0005478304 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 05:00:23 np0005478304 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 05:00:23 np0005478304 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:23 np0005478304 kernel:  Device   empty
Oct  9 05:00:23 np0005478304 kernel: Movable zone start for each node
Oct  9 05:00:23 np0005478304 kernel: Early memory node ranges
Oct  9 05:00:23 np0005478304 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 05:00:23 np0005478304 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 05:00:23 np0005478304 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 05:00:23 np0005478304 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 05:00:23 np0005478304 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 05:00:23 np0005478304 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 05:00:23 np0005478304 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 05:00:23 np0005478304 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 05:00:23 np0005478304 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 05:00:23 np0005478304 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 05:00:23 np0005478304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 05:00:23 np0005478304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 05:00:23 np0005478304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 05:00:23 np0005478304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 05:00:23 np0005478304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 05:00:23 np0005478304 kernel: TSC deadline timer available
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Max. logical packages:   4
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Max. logical dies:       4
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Max. dies per package:   1
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Max. threads per core:   1
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Num. cores per package:     1
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Num. threads per package:   1
Oct  9 05:00:23 np0005478304 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: setup PV sched yield
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 05:00:23 np0005478304 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 05:00:23 np0005478304 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 05:00:23 np0005478304 kernel: Booting paravirtualized kernel on KVM
Oct  9 05:00:23 np0005478304 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 05:00:23 np0005478304 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 05:00:23 np0005478304 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: PV spinlocks enabled
Oct  9 05:00:23 np0005478304 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:23 np0005478304 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 05:00:23 np0005478304 kernel: random: crng init done
Oct  9 05:00:23 np0005478304 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: Fallback order for Node 0: 0 
Oct  9 05:00:23 np0005478304 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 05:00:23 np0005478304 kernel: Policy zone: Normal
Oct  9 05:00:23 np0005478304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 05:00:23 np0005478304 kernel: software IO TLB: area num 4.
Oct  9 05:00:23 np0005478304 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 05:00:23 np0005478304 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 05:00:23 np0005478304 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 05:00:23 np0005478304 kernel: Dynamic Preempt: voluntary
Oct  9 05:00:23 np0005478304 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 05:00:23 np0005478304 kernel: rcu: 	RCU event tracing is enabled.
Oct  9 05:00:23 np0005478304 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 05:00:23 np0005478304 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  9 05:00:23 np0005478304 kernel: 	Rude variant of Tasks RCU enabled.
Oct  9 05:00:23 np0005478304 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  9 05:00:23 np0005478304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 05:00:23 np0005478304 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 05:00:23 np0005478304 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:23 np0005478304 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:23 np0005478304 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 05:00:23 np0005478304 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 05:00:23 np0005478304 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 05:00:23 np0005478304 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 05:00:23 np0005478304 kernel: Console: colour VGA+ 80x25
Oct  9 05:00:23 np0005478304 kernel: printk: console [ttyS0] enabled
Oct  9 05:00:23 np0005478304 kernel: ACPI: Core revision 20230331
Oct  9 05:00:23 np0005478304 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 05:00:23 np0005478304 kernel: x2apic enabled
Oct  9 05:00:23 np0005478304 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 05:00:23 np0005478304 kernel: kvm-guest: setup PV IPIs
Oct  9 05:00:23 np0005478304 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 05:00:23 np0005478304 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 05:00:23 np0005478304 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 05:00:23 np0005478304 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 05:00:23 np0005478304 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 05:00:23 np0005478304 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 05:00:23 np0005478304 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 05:00:23 np0005478304 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 05:00:23 np0005478304 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 05:00:23 np0005478304 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 05:00:23 np0005478304 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 05:00:23 np0005478304 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 05:00:23 np0005478304 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 05:00:23 np0005478304 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 05:00:23 np0005478304 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 05:00:23 np0005478304 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct  9 05:00:23 np0005478304 kernel: Freeing SMP alternatives memory: 40K
Oct  9 05:00:23 np0005478304 kernel: pid_max: default: 32768 minimum: 301
Oct  9 05:00:23 np0005478304 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 05:00:23 np0005478304 kernel: landlock: Up and running.
Oct  9 05:00:23 np0005478304 kernel: Yama: becoming mindful.
Oct  9 05:00:23 np0005478304 kernel: SELinux:  Initializing.
Oct  9 05:00:23 np0005478304 kernel: LSM support for eBPF active
Oct  9 05:00:23 np0005478304 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 05:00:23 np0005478304 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 05:00:23 np0005478304 kernel: ... version:                0
Oct  9 05:00:23 np0005478304 kernel: ... bit width:              48
Oct  9 05:00:23 np0005478304 kernel: ... generic registers:      6
Oct  9 05:00:23 np0005478304 kernel: ... value mask:             0000ffffffffffff
Oct  9 05:00:23 np0005478304 kernel: ... max period:             00007fffffffffff
Oct  9 05:00:23 np0005478304 kernel: ... fixed-purpose events:   0
Oct  9 05:00:23 np0005478304 kernel: ... event mask:             000000000000003f
Oct  9 05:00:23 np0005478304 kernel: signal: max sigframe size: 3376
Oct  9 05:00:23 np0005478304 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 05:00:23 np0005478304 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  9 05:00:23 np0005478304 kernel: smp: Bringing up secondary CPUs ...
Oct  9 05:00:23 np0005478304 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 05:00:23 np0005478304 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 05:00:23 np0005478304 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 05:00:23 np0005478304 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct  9 05:00:23 np0005478304 kernel: node 0 deferred pages initialised in 23ms
Oct  9 05:00:23 np0005478304 kernel: Memory: 7767912K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct  9 05:00:23 np0005478304 kernel: devtmpfs: initialized
Oct  9 05:00:23 np0005478304 kernel: x86/mm: Memory block size: 128MB
Oct  9 05:00:23 np0005478304 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 05:00:23 np0005478304 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 05:00:23 np0005478304 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 05:00:23 np0005478304 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 05:00:23 np0005478304 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 05:00:23 np0005478304 kernel: audit: initializing netlink subsys (disabled)
Oct  9 05:00:23 np0005478304 kernel: audit: type=2000 audit(1760000422.733:1): state=initialized audit_enabled=0 res=1
Oct  9 05:00:23 np0005478304 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 05:00:23 np0005478304 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 05:00:23 np0005478304 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 05:00:23 np0005478304 kernel: cpuidle: using governor menu
Oct  9 05:00:23 np0005478304 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 05:00:23 np0005478304 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 05:00:23 np0005478304 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 05:00:23 np0005478304 kernel: PCI: Using configuration type 1 for base access
Oct  9 05:00:23 np0005478304 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 05:00:23 np0005478304 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 05:00:23 np0005478304 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 05:00:23 np0005478304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 05:00:23 np0005478304 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 05:00:23 np0005478304 kernel: Demotion targets for Node 0: null
Oct  9 05:00:23 np0005478304 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 05:00:23 np0005478304 kernel: ACPI: Added _OSI(Module Device)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 05:00:23 np0005478304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 05:00:23 np0005478304 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 05:00:23 np0005478304 kernel: ACPI: Interpreter enabled
Oct  9 05:00:23 np0005478304 kernel: ACPI: PM: (supports S0 S5)
Oct  9 05:00:23 np0005478304 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 05:00:23 np0005478304 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 05:00:23 np0005478304 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 05:00:23 np0005478304 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 05:00:23 np0005478304 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 05:00:23 np0005478304 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 05:00:23 np0005478304 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 05:00:23 np0005478304 kernel: PCI host bridge to bus 0000:00
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [1] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [2] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [3] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [4] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [5] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [6] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [7] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [8] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [9] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [10] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [11] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [12] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [13] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [14] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [15] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [16] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [17] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [18] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [19] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [20] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [21] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [22] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [23] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [24] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [25] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [26] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [27] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [28] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [29] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [30] registered
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [31] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-2] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-3] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-4] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-5] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 05:00:23 np0005478304 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-6] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-7] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-8] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-9] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-10] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-11] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-12] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-13] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-14] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-15] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-16] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:23 np0005478304 kernel: acpiphp: Slot [0-17] registered
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 05:00:23 np0005478304 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 05:00:23 np0005478304 kernel: iommu: Default domain type: Translated
Oct  9 05:00:23 np0005478304 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 05:00:23 np0005478304 kernel: SCSI subsystem initialized
Oct  9 05:00:23 np0005478304 kernel: ACPI: bus type USB registered
Oct  9 05:00:23 np0005478304 kernel: usbcore: registered new interface driver usbfs
Oct  9 05:00:23 np0005478304 kernel: usbcore: registered new interface driver hub
Oct  9 05:00:23 np0005478304 kernel: usbcore: registered new device driver usb
Oct  9 05:00:23 np0005478304 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 05:00:23 np0005478304 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 05:00:23 np0005478304 kernel: PTP clock support registered
Oct  9 05:00:23 np0005478304 kernel: EDAC MC: Ver: 3.0.0
Oct  9 05:00:23 np0005478304 kernel: NetLabel: Initializing
Oct  9 05:00:23 np0005478304 kernel: NetLabel:  domain hash size = 128
Oct  9 05:00:23 np0005478304 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 05:00:23 np0005478304 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 05:00:23 np0005478304 kernel: PCI: Using ACPI for IRQ routing
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 05:00:23 np0005478304 kernel: vgaarb: loaded
Oct  9 05:00:23 np0005478304 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 05:00:23 np0005478304 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 05:00:23 np0005478304 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 05:00:23 np0005478304 kernel: pnp: PnP ACPI init
Oct  9 05:00:23 np0005478304 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 05:00:23 np0005478304 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 05:00:23 np0005478304 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_INET protocol family
Oct  9 05:00:23 np0005478304 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 05:00:23 np0005478304 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_XDP protocol family
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 05:00:23 np0005478304 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 05:00:23 np0005478304 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 05:00:23 np0005478304 kernel: PCI: CLS 0 bytes, default 64
Oct  9 05:00:23 np0005478304 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 05:00:23 np0005478304 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct  9 05:00:23 np0005478304 kernel: ACPI: bus type thunderbolt registered
Oct  9 05:00:23 np0005478304 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 05:00:23 np0005478304 kernel: Initialise system trusted keyrings
Oct  9 05:00:23 np0005478304 kernel: Key type blacklist registered
Oct  9 05:00:23 np0005478304 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 05:00:23 np0005478304 kernel: zbud: loaded
Oct  9 05:00:23 np0005478304 kernel: integrity: Platform Keyring initialized
Oct  9 05:00:23 np0005478304 kernel: integrity: Machine keyring initialized
Oct  9 05:00:23 np0005478304 kernel: Freeing initrd memory: 86104K
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_ALG protocol family
Oct  9 05:00:23 np0005478304 kernel: xor: automatically using best checksumming function   avx       
Oct  9 05:00:23 np0005478304 kernel: Key type asymmetric registered
Oct  9 05:00:23 np0005478304 kernel: Asymmetric key parser 'x509' registered
Oct  9 05:00:23 np0005478304 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 05:00:23 np0005478304 kernel: io scheduler mq-deadline registered
Oct  9 05:00:23 np0005478304 kernel: io scheduler kyber registered
Oct  9 05:00:23 np0005478304 kernel: io scheduler bfq registered
Oct  9 05:00:23 np0005478304 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 05:00:23 np0005478304 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 05:00:23 np0005478304 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 05:00:23 np0005478304 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct  9 05:00:23 np0005478304 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 05:00:23 np0005478304 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 05:00:23 np0005478304 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 05:00:23 np0005478304 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 05:00:23 np0005478304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 05:00:23 np0005478304 kernel: ACPI: button: Power Button [PWRF]
Oct  9 05:00:23 np0005478304 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 05:00:23 np0005478304 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 05:00:23 np0005478304 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 05:00:23 np0005478304 kernel: Non-volatile memory driver v1.3
Oct  9 05:00:23 np0005478304 kernel: rdac: device handler registered
Oct  9 05:00:23 np0005478304 kernel: hp_sw: device handler registered
Oct  9 05:00:23 np0005478304 kernel: emc: device handler registered
Oct  9 05:00:23 np0005478304 kernel: alua: device handler registered
Oct  9 05:00:23 np0005478304 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 05:00:23 np0005478304 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 05:00:23 np0005478304 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 05:00:23 np0005478304 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 05:00:23 np0005478304 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 05:00:23 np0005478304 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 05:00:23 np0005478304 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 05:00:23 np0005478304 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 05:00:23 np0005478304 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 05:00:23 np0005478304 kernel: hub 1-0:1.0: USB hub found
Oct  9 05:00:23 np0005478304 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 05:00:23 np0005478304 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 05:00:23 np0005478304 kernel: usbserial: USB Serial support registered for generic
Oct  9 05:00:23 np0005478304 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 05:00:23 np0005478304 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 05:00:23 np0005478304 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 05:00:23 np0005478304 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 05:00:23 np0005478304 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 05:00:23 np0005478304 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 05:00:23 np0005478304 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 05:00:23 np0005478304 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 05:00:23 np0005478304 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 05:00:23 np0005478304 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:00:23 UTC (1760000423)
Oct  9 05:00:23 np0005478304 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct  9 05:00:23 np0005478304 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 05:00:23 np0005478304 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 05:00:23 np0005478304 kernel: usbcore: registered new interface driver usbhid
Oct  9 05:00:23 np0005478304 kernel: usbhid: USB HID core driver
Oct  9 05:00:23 np0005478304 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 05:00:23 np0005478304 kernel: Initializing XFRM netlink socket
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_INET6 protocol family
Oct  9 05:00:23 np0005478304 kernel: Segment Routing with IPv6
Oct  9 05:00:23 np0005478304 kernel: NET: Registered PF_PACKET protocol family
Oct  9 05:00:23 np0005478304 kernel: mpls_gso: MPLS GSO support
Oct  9 05:00:23 np0005478304 kernel: IPI shorthand broadcast: enabled
Oct  9 05:00:23 np0005478304 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 05:00:23 np0005478304 kernel: AES CTR mode by8 optimization enabled
Oct  9 05:00:23 np0005478304 kernel: sched_clock: Marking stable (1084002084, 142155717)->(1316969753, -90811952)
Oct  9 05:00:23 np0005478304 kernel: registered taskstats version 1
Oct  9 05:00:23 np0005478304 kernel: Loading compiled-in X.509 certificates
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 05:00:23 np0005478304 kernel: Demotion targets for Node 0: null
Oct  9 05:00:23 np0005478304 kernel: page_owner is disabled
Oct  9 05:00:23 np0005478304 kernel: Key type .fscrypt registered
Oct  9 05:00:23 np0005478304 kernel: Key type fscrypt-provisioning registered
Oct  9 05:00:23 np0005478304 kernel: Key type big_key registered
Oct  9 05:00:23 np0005478304 kernel: Key type encrypted registered
Oct  9 05:00:23 np0005478304 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 05:00:23 np0005478304 kernel: Loading compiled-in module X.509 certificates
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 05:00:23 np0005478304 kernel: ima: Allocated hash algorithm: sha256
Oct  9 05:00:23 np0005478304 kernel: ima: No architecture policies found
Oct  9 05:00:23 np0005478304 kernel: evm: Initialising EVM extended attributes:
Oct  9 05:00:23 np0005478304 kernel: evm: security.selinux
Oct  9 05:00:23 np0005478304 kernel: evm: security.SMACK64 (disabled)
Oct  9 05:00:23 np0005478304 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 05:00:23 np0005478304 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 05:00:23 np0005478304 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 05:00:23 np0005478304 kernel: evm: security.apparmor (disabled)
Oct  9 05:00:23 np0005478304 kernel: evm: security.ima
Oct  9 05:00:23 np0005478304 kernel: evm: security.capability
Oct  9 05:00:23 np0005478304 kernel: evm: HMAC attrs: 0x1
Oct  9 05:00:23 np0005478304 kernel: Running certificate verification RSA selftest
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 05:00:23 np0005478304 kernel: Running certificate verification ECDSA selftest
Oct  9 05:00:23 np0005478304 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 05:00:23 np0005478304 kernel: clk: Disabling unused clocks
Oct  9 05:00:23 np0005478304 kernel: Freeing unused decrypted memory: 2028K
Oct  9 05:00:23 np0005478304 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 05:00:23 np0005478304 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 05:00:23 np0005478304 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 05:00:23 np0005478304 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 05:00:23 np0005478304 kernel: Run /init as init process
Oct  9 05:00:23 np0005478304 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:23 np0005478304 systemd: Detected virtualization kvm.
Oct  9 05:00:23 np0005478304 systemd: Detected architecture x86-64.
Oct  9 05:00:23 np0005478304 systemd: Running in initrd.
Oct  9 05:00:23 np0005478304 systemd: No hostname configured, using default hostname.
Oct  9 05:00:23 np0005478304 systemd: Hostname set to <localhost>.
Oct  9 05:00:23 np0005478304 systemd: Initializing machine ID from VM UUID.
Oct  9 05:00:23 np0005478304 systemd: Queued start job for default target Initrd Default Target.
Oct  9 05:00:23 np0005478304 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:23 np0005478304 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:23 np0005478304 systemd: Reached target Initrd /usr File System.
Oct  9 05:00:23 np0005478304 systemd: Reached target Local File Systems.
Oct  9 05:00:23 np0005478304 systemd: Reached target Path Units.
Oct  9 05:00:23 np0005478304 systemd: Reached target Slice Units.
Oct  9 05:00:23 np0005478304 systemd: Reached target Swaps.
Oct  9 05:00:23 np0005478304 systemd: Reached target Timer Units.
Oct  9 05:00:23 np0005478304 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:23 np0005478304 systemd: Listening on Journal Socket (/dev/log).
Oct  9 05:00:23 np0005478304 systemd: Listening on Journal Socket.
Oct  9 05:00:23 np0005478304 systemd: Listening on udev Control Socket.
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 05:00:23 np0005478304 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 05:00:23 np0005478304 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:23 np0005478304 systemd: Reached target Socket Units.
Oct  9 05:00:23 np0005478304 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 05:00:23 np0005478304 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 05:00:23 np0005478304 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:23 np0005478304 systemd: Starting Journal Service...
Oct  9 05:00:23 np0005478304 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:23 np0005478304 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:23 np0005478304 systemd: Starting Create System Users...
Oct  9 05:00:23 np0005478304 systemd: Starting Setup Virtual Console...
Oct  9 05:00:23 np0005478304 systemd: Finished Create List of Static Device Nodes.
Oct  9 05:00:23 np0005478304 systemd: Finished Apply Kernel Variables.
Oct  9 05:00:23 np0005478304 systemd: Finished Create System Users.
Oct  9 05:00:23 np0005478304 systemd-journald[284]: Journal started
Oct  9 05:00:23 np0005478304 systemd-journald[284]: Runtime Journal (/run/log/journal/ed71292475ec452aa842ae61b9b9ed0c) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:23 np0005478304 systemd-sysusers[288]: Creating group 'users' with GID 100.
Oct  9 05:00:23 np0005478304 systemd-sysusers[288]: Creating group 'dbus' with GID 81.
Oct  9 05:00:23 np0005478304 systemd-sysusers[288]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 05:00:23 np0005478304 systemd: Started Journal Service.
Oct  9 05:00:23 np0005478304 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:23 np0005478304 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:24 np0005478304 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:24 np0005478304 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:24 np0005478304 systemd[1]: Finished Setup Virtual Console.
Oct  9 05:00:24 np0005478304 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting dracut cmdline hook...
Oct  9 05:00:24 np0005478304 dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 05:00:24 np0005478304 dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 05:00:24 np0005478304 systemd[1]: Finished dracut cmdline hook.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting dracut pre-udev hook...
Oct  9 05:00:24 np0005478304 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 05:00:24 np0005478304 kernel: device-mapper: uevent: version 1.0.3
Oct  9 05:00:24 np0005478304 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 05:00:24 np0005478304 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 05:00:24 np0005478304 kernel: RPC: Registered udp transport module.
Oct  9 05:00:24 np0005478304 kernel: RPC: Registered tcp transport module.
Oct  9 05:00:24 np0005478304 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 05:00:24 np0005478304 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 05:00:24 np0005478304 rpc.statd[415]: Version 2.5.4 starting
Oct  9 05:00:24 np0005478304 rpc.statd[415]: Initializing NSM state
Oct  9 05:00:24 np0005478304 rpc.idmapd[420]: Setting log level to 0
Oct  9 05:00:24 np0005478304 systemd[1]: Finished dracut pre-udev hook.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:24 np0005478304 systemd-udevd[433]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:24 np0005478304 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 05:00:24 np0005478304 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 05:00:24 np0005478304 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 05:00:24 np0005478304 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:24 np0005478304 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:24 np0005478304 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:24 np0005478304 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:24 np0005478304 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:24 np0005478304 systemd[1]: Reached target Network.
Oct  9 05:00:24 np0005478304 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 05:00:24 np0005478304 systemd[1]: Starting dracut initqueue hook...
Oct  9 05:00:24 np0005478304 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 05:00:24 np0005478304 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 05:00:24 np0005478304 kernel: vda: vda1
Oct  9 05:00:24 np0005478304 systemd-udevd[449]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:24 np0005478304 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:24 np0005478304 systemd[1]: Reached target Initrd Root Device.
Oct  9 05:00:24 np0005478304 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 05:00:24 np0005478304 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 05:00:24 np0005478304 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 05:00:24 np0005478304 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 05:00:24 np0005478304 kernel: scsi host0: ahci
Oct  9 05:00:24 np0005478304 kernel: scsi host1: ahci
Oct  9 05:00:24 np0005478304 kernel: scsi host2: ahci
Oct  9 05:00:24 np0005478304 kernel: scsi host3: ahci
Oct  9 05:00:24 np0005478304 kernel: scsi host4: ahci
Oct  9 05:00:24 np0005478304 kernel: scsi host5: ahci
Oct  9 05:00:24 np0005478304 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 49 lpm-pol 0
Oct  9 05:00:24 np0005478304 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 05:00:24 np0005478304 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 05:00:24 np0005478304 kernel: ata1.00: applying bridge limits
Oct  9 05:00:24 np0005478304 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 05:00:24 np0005478304 kernel: ata1.00: configured for UDMA/100
Oct  9 05:00:24 np0005478304 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 05:00:24 np0005478304 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 05:00:24 np0005478304 systemd[1]: Reached target System Initialization.
Oct  9 05:00:24 np0005478304 systemd[1]: Reached target Basic System.
Oct  9 05:00:24 np0005478304 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 05:00:24 np0005478304 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 05:00:24 np0005478304 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 05:00:25 np0005478304 systemd[1]: Finished dracut initqueue hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting dracut pre-mount hook...
Oct  9 05:00:25 np0005478304 systemd[1]: Finished dracut pre-mount hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 05:00:25 np0005478304 systemd-fsck[529]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 05:00:25 np0005478304 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 05:00:25 np0005478304 systemd[1]: Mounting /sysroot...
Oct  9 05:00:25 np0005478304 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 05:00:25 np0005478304 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 05:00:25 np0005478304 kernel: XFS (vda1): Ending clean mount
Oct  9 05:00:25 np0005478304 systemd[1]: Mounted /sysroot.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Initrd Root File System.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 05:00:25 np0005478304 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Initrd File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Initrd Default Target.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting dracut mount hook...
Oct  9 05:00:25 np0005478304 systemd[1]: Finished dracut mount hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 05:00:25 np0005478304 rpc.idmapd[420]: exiting on signal 15
Oct  9 05:00:25 np0005478304 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Network.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Timer Units.
Oct  9 05:00:25 np0005478304 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Initrd Default Target.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Basic System.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Initrd Root Device.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Path Units.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Remote File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Slice Units.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Socket Units.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target System Initialization.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Local File Systems.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Swaps.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut mount hook.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut initqueue hook.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Setup Virtual Console.
Oct  9 05:00:25 np0005478304 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Closed udev Control Socket.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Closed udev Kernel Socket.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 05:00:25 np0005478304 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped dracut cmdline hook.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting Cleanup udev Database...
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 05:00:25 np0005478304 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 05:00:25 np0005478304 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Stopped Create System Users.
Oct  9 05:00:25 np0005478304 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 05:00:25 np0005478304 systemd[1]: Finished Cleanup udev Database.
Oct  9 05:00:25 np0005478304 systemd[1]: Reached target Switch Root.
Oct  9 05:00:25 np0005478304 systemd[1]: Starting Switch Root...
Oct  9 05:00:25 np0005478304 systemd[1]: Switching root.
Oct  9 05:00:25 np0005478304 systemd-journald[284]: Journal stopped
Oct  9 05:00:26 np0005478304 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  9 05:00:26 np0005478304 kernel: audit: type=1404 audit(1760000425.762:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:00:26 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:00:26 np0005478304 kernel: audit: type=1403 audit(1760000425.875:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 05:00:26 np0005478304 systemd: Successfully loaded SELinux policy in 116.240ms.
Oct  9 05:00:26 np0005478304 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.326ms.
Oct  9 05:00:26 np0005478304 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:00:26 np0005478304 systemd: Detected virtualization kvm.
Oct  9 05:00:26 np0005478304 systemd: Detected architecture x86-64.
Oct  9 05:00:26 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:00:26 np0005478304 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd: Stopped Switch Root.
Oct  9 05:00:26 np0005478304 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 05:00:26 np0005478304 systemd: Created slice Slice /system/getty.
Oct  9 05:00:26 np0005478304 systemd: Created slice Slice /system/serial-getty.
Oct  9 05:00:26 np0005478304 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 05:00:26 np0005478304 systemd: Created slice User and Session Slice.
Oct  9 05:00:26 np0005478304 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 05:00:26 np0005478304 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 05:00:26 np0005478304 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 05:00:26 np0005478304 systemd: Reached target Local Encrypted Volumes.
Oct  9 05:00:26 np0005478304 systemd: Stopped target Switch Root.
Oct  9 05:00:26 np0005478304 systemd: Stopped target Initrd File Systems.
Oct  9 05:00:26 np0005478304 systemd: Stopped target Initrd Root File System.
Oct  9 05:00:26 np0005478304 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 05:00:26 np0005478304 systemd: Reached target Path Units.
Oct  9 05:00:26 np0005478304 systemd: Reached target rpc_pipefs.target.
Oct  9 05:00:26 np0005478304 systemd: Reached target Slice Units.
Oct  9 05:00:26 np0005478304 systemd: Reached target Swaps.
Oct  9 05:00:26 np0005478304 systemd: Reached target Local Verity Protected Volumes.
Oct  9 05:00:26 np0005478304 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 05:00:26 np0005478304 systemd: Reached target RPC Port Mapper.
Oct  9 05:00:26 np0005478304 systemd: Listening on Process Core Dump Socket.
Oct  9 05:00:26 np0005478304 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 05:00:26 np0005478304 systemd: Listening on udev Control Socket.
Oct  9 05:00:26 np0005478304 systemd: Listening on udev Kernel Socket.
Oct  9 05:00:26 np0005478304 systemd: Mounting Huge Pages File System...
Oct  9 05:00:26 np0005478304 systemd: Mounting POSIX Message Queue File System...
Oct  9 05:00:26 np0005478304 systemd: Mounting Kernel Debug File System...
Oct  9 05:00:26 np0005478304 systemd: Mounting Kernel Trace File System...
Oct  9 05:00:26 np0005478304 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:26 np0005478304 systemd: Starting Create List of Static Device Nodes...
Oct  9 05:00:26 np0005478304 systemd: Starting Load Kernel Module configfs...
Oct  9 05:00:26 np0005478304 systemd: Starting Load Kernel Module drm...
Oct  9 05:00:26 np0005478304 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 05:00:26 np0005478304 systemd: Starting Load Kernel Module fuse...
Oct  9 05:00:26 np0005478304 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 05:00:26 np0005478304 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd: Stopped File System Check on Root Device.
Oct  9 05:00:26 np0005478304 systemd: Stopped Journal Service.
Oct  9 05:00:26 np0005478304 systemd: Starting Journal Service...
Oct  9 05:00:26 np0005478304 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 05:00:26 np0005478304 systemd: Starting Generate network units from Kernel command line...
Oct  9 05:00:26 np0005478304 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:26 np0005478304 kernel: fuse: init (API version 7.37)
Oct  9 05:00:26 np0005478304 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 05:00:26 np0005478304 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 05:00:26 np0005478304 systemd: Starting Apply Kernel Variables...
Oct  9 05:00:26 np0005478304 systemd: Starting Coldplug All udev Devices...
Oct  9 05:00:26 np0005478304 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 05:00:26 np0005478304 systemd: Mounted Huge Pages File System.
Oct  9 05:00:26 np0005478304 systemd: Mounted POSIX Message Queue File System.
Oct  9 05:00:26 np0005478304 systemd: Mounted Kernel Debug File System.
Oct  9 05:00:26 np0005478304 systemd: Mounted Kernel Trace File System.
Oct  9 05:00:26 np0005478304 systemd: Finished Create List of Static Device Nodes.
Oct  9 05:00:26 np0005478304 kernel: ACPI: bus type drm_connector registered
Oct  9 05:00:26 np0005478304 systemd: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd: Finished Load Kernel Module configfs.
Oct  9 05:00:26 np0005478304 systemd: modprobe@drm.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd: Finished Load Kernel Module drm.
Oct  9 05:00:26 np0005478304 systemd-journald[650]: Journal started
Oct  9 05:00:26 np0005478304 systemd-journald[650]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:26 np0005478304 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 05:00:26 np0005478304 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd: Started Journal Service.
Oct  9 05:00:26 np0005478304 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 05:00:26 np0005478304 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Apply Kernel Variables.
Oct  9 05:00:26 np0005478304 systemd[1]: Mounting FUSE Control File System...
Oct  9 05:00:26 np0005478304 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Rebuild Hardware Database...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 05:00:26 np0005478304 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 05:00:26 np0005478304 systemd-journald[650]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:00:26 np0005478304 systemd-journald[650]: Received client request to flush runtime journal.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Create System Users...
Oct  9 05:00:26 np0005478304 systemd[1]: Mounted FUSE Control File System.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 05:00:26 np0005478304 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Create System Users.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target Local File Systems.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  9 05:00:26 np0005478304 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 05:00:26 np0005478304 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 05:00:26 np0005478304 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 05:00:26 np0005478304 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 05:00:26 np0005478304 bootctl[667]: Couldn't find EFI system partition, skipping.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Security Auditing Service...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting RPC Bind...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Rebuild Journal Catalog...
Oct  9 05:00:26 np0005478304 auditd[673]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 05:00:26 np0005478304 auditd[673]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 05:00:26 np0005478304 systemd[1]: Started RPC Bind.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Rebuild Journal Catalog.
Oct  9 05:00:26 np0005478304 augenrules[678]: /sbin/augenrules: No change
Oct  9 05:00:26 np0005478304 augenrules[693]: No rules
Oct  9 05:00:26 np0005478304 augenrules[693]: enabled 1
Oct  9 05:00:26 np0005478304 augenrules[693]: failure 1
Oct  9 05:00:26 np0005478304 augenrules[693]: pid 673
Oct  9 05:00:26 np0005478304 augenrules[693]: rate_limit 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_limit 8192
Oct  9 05:00:26 np0005478304 augenrules[693]: lost 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog 2
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time_actual 0
Oct  9 05:00:26 np0005478304 augenrules[693]: enabled 1
Oct  9 05:00:26 np0005478304 augenrules[693]: failure 1
Oct  9 05:00:26 np0005478304 augenrules[693]: pid 673
Oct  9 05:00:26 np0005478304 augenrules[693]: rate_limit 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_limit 8192
Oct  9 05:00:26 np0005478304 augenrules[693]: lost 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog 3
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time_actual 0
Oct  9 05:00:26 np0005478304 augenrules[693]: enabled 1
Oct  9 05:00:26 np0005478304 augenrules[693]: failure 1
Oct  9 05:00:26 np0005478304 augenrules[693]: pid 673
Oct  9 05:00:26 np0005478304 augenrules[693]: rate_limit 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_limit 8192
Oct  9 05:00:26 np0005478304 augenrules[693]: lost 0
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog 4
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time 60000
Oct  9 05:00:26 np0005478304 augenrules[693]: backlog_wait_time_actual 0
Oct  9 05:00:26 np0005478304 systemd[1]: Started Security Auditing Service.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Rebuild Hardware Database.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Update is Completed...
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Update is Completed.
Oct  9 05:00:26 np0005478304 systemd-udevd[701]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:00:26 np0005478304 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target System Initialization.
Oct  9 05:00:26 np0005478304 systemd[1]: Started dnf makecache --timer.
Oct  9 05:00:26 np0005478304 systemd[1]: Started Daily rotation of log files.
Oct  9 05:00:26 np0005478304 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target Timer Units.
Oct  9 05:00:26 np0005478304 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 05:00:26 np0005478304 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target Socket Units.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 05:00:26 np0005478304 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 05:00:26 np0005478304 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 05:00:26 np0005478304 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 05:00:26 np0005478304 systemd[1]: Started D-Bus System Message Bus.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target Basic System.
Oct  9 05:00:26 np0005478304 systemd-udevd[714]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:00:26 np0005478304 dbus-broker-lau[711]: Ready
Oct  9 05:00:26 np0005478304 systemd[1]: Starting NTP client/server...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 05:00:26 np0005478304 systemd[1]: Starting IPv4 firewall with iptables...
Oct  9 05:00:26 np0005478304 systemd[1]: Started irqbalance daemon.
Oct  9 05:00:26 np0005478304 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 05:00:26 np0005478304 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:26 np0005478304 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:26 np0005478304 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target sshd-keygen.target.
Oct  9 05:00:26 np0005478304 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 05:00:26 np0005478304 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 05:00:26 np0005478304 systemd[1]: Starting User Login Management...
Oct  9 05:00:26 np0005478304 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 05:00:26 np0005478304 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 05:00:26 np0005478304 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 05:00:26 np0005478304 chronyd[756]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:00:26 np0005478304 chronyd[756]: Loaded 0 symmetric keys
Oct  9 05:00:26 np0005478304 systemd-logind[743]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 05:00:26 np0005478304 chronyd[756]: Using right/UTC timezone to obtain leap second data
Oct  9 05:00:26 np0005478304 systemd-logind[743]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 05:00:26 np0005478304 chronyd[756]: Loaded seccomp filter (level 2)
Oct  9 05:00:26 np0005478304 systemd[1]: Started NTP client/server.
Oct  9 05:00:26 np0005478304 systemd-logind[743]: New seat seat0.
Oct  9 05:00:26 np0005478304 systemd[1]: Started User Login Management.
Oct  9 05:00:27 np0005478304 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 05:00:27 np0005478304 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 05:00:27 np0005478304 iptables.init[737]: iptables: Applying firewall rules: [  OK  ]
Oct  9 05:00:27 np0005478304 systemd[1]: Finished IPv4 firewall with iptables.
Oct  9 05:00:27 np0005478304 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 05:00:27 np0005478304 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 05:00:27 np0005478304 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 05:00:27 np0005478304 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 05:00:27 np0005478304 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 05:00:27 np0005478304 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 05:00:27 np0005478304 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 05:00:27 np0005478304 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 05:00:27 np0005478304 kernel: Console: switching to colour dummy device 80x25
Oct  9 05:00:27 np0005478304 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 05:00:27 np0005478304 kernel: [drm] features: -context_init
Oct  9 05:00:27 np0005478304 kernel: [drm] number of scanouts: 1
Oct  9 05:00:27 np0005478304 kernel: [drm] number of cap sets: 0
Oct  9 05:00:27 np0005478304 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 05:00:27 np0005478304 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 05:00:27 np0005478304 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 05:00:27 np0005478304 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: TSC scaling supported
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: Nested Paging enabled
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: LBR virtualization supported
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 05:00:27 np0005478304 kernel: kvm_amd: Virtual GIF supported
Oct  9 05:00:27 np0005478304 cloud-init[793]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:00:27 +0000. Up 4.95 seconds.
Oct  9 05:00:27 np0005478304 systemd[1]: run-cloud\x2dinit-tmp-tmpzzelwwcx.mount: Deactivated successfully.
Oct  9 05:00:27 np0005478304 systemd[1]: Starting Hostname Service...
Oct  9 05:00:27 np0005478304 systemd[1]: Started Hostname Service.
Oct  9 05:00:27 np0005478304 systemd-hostnamed[807]: Hostname set to <np0005478304> (static)
Oct  9 05:00:27 np0005478304 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 05:00:27 np0005478304 systemd[1]: Reached target Preparation for Network.
Oct  9 05:00:27 np0005478304 systemd[1]: Starting Network Manager...
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8073] NetworkManager (version 1.54.1-1.el9) is starting... (boot:f4377bae-7107-4315-9822-dc318aaac0ab)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8075] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8156] manager[0x561769363080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8185] hostname: hostname: using hostnamed
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8185] hostname: static hostname changed from (none) to "np0005478304"
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8188] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8275] manager[0x561769363080]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8275] manager[0x561769363080]: rfkill: WWAN hardware radio set enabled
Oct  9 05:00:27 np0005478304 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8334] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8334] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8335] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8335] manager: Networking is enabled by state file
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8336] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8356] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8372] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8391] dhcp: init: Using DHCP client 'internal'
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8393] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8403] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8412] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8417] device (lo): Activation: starting connection 'lo' (c4b4942b-b288-4887-b67b-02123977123c)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8424] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8426] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8446] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8449] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8450] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8452] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8453] device (eth0): carrier: link connected
Oct  9 05:00:27 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8465] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 systemd[1]: Started Network Manager.
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8476] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:00:27 np0005478304 systemd[1]: Reached target Network.
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8481] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8491] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8492] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8493] manager: NetworkManager state is now CONNECTING
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8494] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8499] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:00:27 np0005478304 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8503] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8518] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:00:27 np0005478304 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8543] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8549] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:00:27 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8603] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8605] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:00:27 np0005478304 NetworkManager[811]: <info>  [1760000427.8609] device (lo): Activation: successful, device activated.
Oct  9 05:00:27 np0005478304 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 05:00:27 np0005478304 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 05:00:27 np0005478304 systemd[1]: Reached target NFS client services.
Oct  9 05:00:27 np0005478304 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 05:00:27 np0005478304 systemd[1]: Reached target Remote File Systems.
Oct  9 05:00:27 np0005478304 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 05:00:29 np0005478304 NetworkManager[811]: <info>  [1760000429.1746] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:00:30 np0005478304 NetworkManager[811]: <info>  [1760000430.2803] dhcp6 (eth0): state changed new lease, address=2001:db8::372
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4154] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4198] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4199] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4202] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4205] device (eth0): Activation: successful, device activated.
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4218] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:00:31 np0005478304 NetworkManager[811]: <info>  [1760000431.4222] manager: startup complete
Oct  9 05:00:31 np0005478304 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:00:31 np0005478304 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 05:00:31 np0005478304 cloud-init[877]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:00:31 +0000. Up 9.23 seconds.
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |  eth0  | True |        192.168.26.193        | 255.255.255.0 | global | fa:16:3e:49:30:79 |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |  eth0  | True |      2001:db8::372/128       |       .       | global | fa:16:3e:49:30:79 |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |  eth0  | True | fe80::f816:3eff:fe49:3079/64 |       .       |  link  | fa:16:3e:49:30:79 |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   2   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: ++++++++++++++++++++++Route IPv6 info++++++++++++++++++++++
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: | Route |  Destination  |   Gateway   | Interface | Flags |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   1   |  2001:db8::1  |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   2   | 2001:db8::372 |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   3   |   fe80::/64   |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   4   |      ::/0     | 2001:db8::1 |    eth0   |   UG  |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   6   |     local     |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   7   |     local     |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: |   8   |   multicast   |      ::     |    eth0   |   U   |
Oct  9 05:00:31 np0005478304 cloud-init[877]: ci-info: +-------+---------------+-------------+-----------+-------+
Oct  9 05:00:32 np0005478304 cloud-init[877]: Generating public/private rsa key pair.
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: SHA256:UlLgCvR+JDK2D50Y1ssXvGrUvaMX6CFf0AdRe3PDSCo root@np0005478304
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: +---[RSA 3072]----+
Oct  9 05:00:32 np0005478304 cloud-init[877]: |  .   ..oo. .    |
Oct  9 05:00:32 np0005478304 cloud-init[877]: | . o o ..  + o   |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |  B + *.E.o + +  |
Oct  9 05:00:32 np0005478304 cloud-init[877]: | o @ B.*.... o . |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |  + O *oS.       |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |   +.+o.o.       |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |    ++ oo.       |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |   .  o...       |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |      ..         |
Oct  9 05:00:32 np0005478304 cloud-init[877]: +----[SHA256]-----+
Oct  9 05:00:32 np0005478304 cloud-init[877]: Generating public/private ecdsa key pair.
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: SHA256:c6thdQh6Bl38s0LBodThUjP6npFhugpwwJMMtGrEa/4 root@np0005478304
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: +---[ECDSA 256]---+
Oct  9 05:00:32 np0005478304 cloud-init[877]: |o.      .+Bo     |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |.+..   o *=o     |
Oct  9 05:00:32 np0005478304 cloud-init[877]: | +*   . * +o     |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |o .o   o *.+o    |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |.+. . . S.B .o   |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |+  o   o *.=.    |
Oct  9 05:00:32 np0005478304 cloud-init[877]: | .  .   + +.     |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |  .  . o o       |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |   E  . .        |
Oct  9 05:00:32 np0005478304 cloud-init[877]: +----[SHA256]-----+
Oct  9 05:00:32 np0005478304 cloud-init[877]: Generating public/private ed25519 key pair.
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  9 05:00:32 np0005478304 cloud-init[877]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key fingerprint is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: SHA256:YOmDaajMdAXcAhQxDKlCdiOQgvwNmn+uq0VVisqvS4E root@np0005478304
Oct  9 05:00:32 np0005478304 cloud-init[877]: The key's randomart image is:
Oct  9 05:00:32 np0005478304 cloud-init[877]: +--[ED25519 256]--+
Oct  9 05:00:32 np0005478304 cloud-init[877]: |XO=..  .         |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |=*.*o.o.         |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |= *.=++          |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |=o.oo* .         |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |E=oo+ o S        |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |+.=o . .         |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |.+ oo            |
Oct  9 05:00:32 np0005478304 cloud-init[877]: |. o  .           |
Oct  9 05:00:32 np0005478304 cloud-init[877]: | +ooo            |
Oct  9 05:00:32 np0005478304 cloud-init[877]: +----[SHA256]-----+
Oct  9 05:00:32 np0005478304 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 05:00:32 np0005478304 systemd[1]: Reached target Cloud-config availability.
Oct  9 05:00:32 np0005478304 systemd[1]: Reached target Network is Online.
Oct  9 05:00:32 np0005478304 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 05:00:32 np0005478304 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 05:00:32 np0005478304 systemd[1]: Starting System Logging Service...
Oct  9 05:00:32 np0005478304 sm-notify[960]: Version 2.5.4 starting
Oct  9 05:00:32 np0005478304 systemd[1]: Starting OpenSSH server daemon...
Oct  9 05:00:32 np0005478304 systemd[1]: Starting Permit User Sessions...
Oct  9 05:00:32 np0005478304 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 05:00:32 np0005478304 systemd[1]: Started OpenSSH server daemon.
Oct  9 05:00:32 np0005478304 systemd[1]: Finished Permit User Sessions.
Oct  9 05:00:32 np0005478304 systemd[1]: Started Command Scheduler.
Oct  9 05:00:32 np0005478304 systemd[1]: Started Getty on tty1.
Oct  9 05:00:32 np0005478304 rsyslogd[961]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="961" x-info="https://www.rsyslog.com"] start
Oct  9 05:00:32 np0005478304 rsyslogd[961]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  9 05:00:32 np0005478304 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 05:00:32 np0005478304 systemd[1]: Reached target Login Prompts.
Oct  9 05:00:32 np0005478304 systemd[1]: Started System Logging Service.
Oct  9 05:00:32 np0005478304 systemd[1]: Reached target Multi-User System.
Oct  9 05:00:32 np0005478304 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 05:00:32 np0005478304 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 05:00:32 np0005478304 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 05:00:32 np0005478304 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:00:32 np0005478304 chronyd[756]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct  9 05:00:32 np0005478304 chronyd[756]: System clock TAI offset set to 37 seconds
Oct  9 05:00:32 np0005478304 cloud-init[973]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:00:32 +0000. Up 10.42 seconds.
Oct  9 05:00:32 np0005478304 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 05:00:32 np0005478304 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 05:00:33 np0005478304 cloud-init[995]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:00:33 +0000. Up 10.78 seconds.
Oct  9 05:00:33 np0005478304 cloud-init[997]: #############################################################
Oct  9 05:00:33 np0005478304 cloud-init[998]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:33 np0005478304 cloud-init[1000]: 256 SHA256:c6thdQh6Bl38s0LBodThUjP6npFhugpwwJMMtGrEa/4 root@np0005478304 (ECDSA)
Oct  9 05:00:33 np0005478304 cloud-init[1002]: 256 SHA256:YOmDaajMdAXcAhQxDKlCdiOQgvwNmn+uq0VVisqvS4E root@np0005478304 (ED25519)
Oct  9 05:00:33 np0005478304 cloud-init[1004]: 3072 SHA256:UlLgCvR+JDK2D50Y1ssXvGrUvaMX6CFf0AdRe3PDSCo root@np0005478304 (RSA)
Oct  9 05:00:33 np0005478304 cloud-init[1005]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  9 05:00:33 np0005478304 cloud-init[1006]: #############################################################
Oct  9 05:00:33 np0005478304 cloud-init[995]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:00:33 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.92 seconds
Oct  9 05:00:33 np0005478304 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 05:00:33 np0005478304 systemd[1]: Reached target Cloud-init target.
Oct  9 05:00:33 np0005478304 systemd[1]: Startup finished in 1.329s (kernel) + 1.987s (initrd) + 7.659s (userspace) = 10.977s.
Oct  9 05:00:37 np0005478304 irqbalance[738]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 05:00:37 np0005478304 irqbalance[738]: IRQ 45 affinity is now unmanaged
Oct  9 05:00:37 np0005478304 irqbalance[738]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 05:00:37 np0005478304 irqbalance[738]: IRQ 44 affinity is now unmanaged
Oct  9 05:00:37 np0005478304 irqbalance[738]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 05:00:37 np0005478304 irqbalance[738]: IRQ 42 affinity is now unmanaged
Oct  9 05:00:41 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:00:57 np0005478304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:01:02 np0005478304 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 05:01:02 np0005478304 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 05:01:02 np0005478304 systemd-logind[743]: New session 1 of user zuul.
Oct  9 05:01:02 np0005478304 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 05:01:02 np0005478304 systemd[1]: Starting User Manager for UID 1000...
Oct  9 05:01:02 np0005478304 systemd[1031]: Queued start job for default target Main User Target.
Oct  9 05:01:02 np0005478304 systemd[1031]: Created slice User Application Slice.
Oct  9 05:01:02 np0005478304 systemd[1031]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 05:01:02 np0005478304 systemd[1031]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 05:01:02 np0005478304 systemd[1031]: Reached target Paths.
Oct  9 05:01:02 np0005478304 systemd[1031]: Reached target Timers.
Oct  9 05:01:02 np0005478304 systemd[1031]: Starting D-Bus User Message Bus Socket...
Oct  9 05:01:02 np0005478304 systemd[1031]: Starting Create User's Volatile Files and Directories...
Oct  9 05:01:02 np0005478304 systemd[1031]: Finished Create User's Volatile Files and Directories.
Oct  9 05:01:02 np0005478304 systemd[1031]: Listening on D-Bus User Message Bus Socket.
Oct  9 05:01:02 np0005478304 systemd[1031]: Reached target Sockets.
Oct  9 05:01:02 np0005478304 systemd[1031]: Reached target Basic System.
Oct  9 05:01:02 np0005478304 systemd[1031]: Reached target Main User Target.
Oct  9 05:01:02 np0005478304 systemd[1031]: Startup finished in 93ms.
Oct  9 05:01:02 np0005478304 systemd[1]: Started User Manager for UID 1000.
Oct  9 05:01:02 np0005478304 systemd[1]: Started Session 1 of User zuul.
Oct  9 05:01:02 np0005478304 python3[1113]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:06 np0005478304 python3[1141]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:10 np0005478304 python3[1195]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:11 np0005478304 python3[1235]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  9 05:01:13 np0005478304 python3[1261]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJHvXKF+OC4TiCL/aa/o6rq9+SFP7bwIAGJR40fwDShswdP6EsCB3q74rxa7HZk7nAlq9GsqcvEMBnmYvXZUScuzDatbNHHj3L31gOIlnhwqJ+iI2XdTfBbmIf8ccHDrx1xB3Hr6l9Q5eqR06BX9lfG4zf0ZMnKgwxfT7bXERv1O989RrexR2EoG/yjbB1iGKYDIvULj9yB/Lzd91Yva830/7KuOe3mZkeUMPkp7g4dMGF7POukU3bb+UgETc+cweFS+cE2oeZeFxj6d6jKBDkpWNKLJcng32oQUvkUbS53tMgPVCo75ZmBtWas4DZeuhJOIo5dD1eFlOVaBAP+38K/N68/C4UkR/HKomLSssPXAmV6MLWoDu9thuzfr8bgmyZT4hnBveyALdASAffBpfuv8R/2Z6K/F7FIDgew4RyZcKyQjOvsxPqfI+6+Jq4hxxOiGGLQmKsHF+T/crR7fIS8NKaqRy/QwezRy5WD56EvUh4/y9u3fKQK8uVbRdYHb0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:13 np0005478304 python3[1285]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:13 np0005478304 python3[1384]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478304 python3[1455]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000473.699582-254-220828465122187/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa follow=False checksum=c7f5caef86df45fcb47abb858beda9b774bf09c9 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:14 np0005478304 python3[1578]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:14 np0005478304 python3[1649]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000474.326779-309-280111785924781/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=a671a8077fb34b76835f3572668f1b22_id_rsa.pub follow=False checksum=81cf534faaee7eab1d192c4cf78a7f0119953204 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:15 np0005478304 python3[1697]: ansible-ping Invoked with data=pong
Oct  9 05:01:16 np0005478304 python3[1721]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:01:18 np0005478304 python3[1775]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  9 05:01:19 np0005478304 python3[1807]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478304 python3[1831]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478304 python3[1855]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478304 python3[1879]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:19 np0005478304 python3[1903]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:20 np0005478304 python3[1927]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478304 python3[1953]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:21 np0005478304 python3[2031]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:22 np0005478304 python3[2104]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000481.6082754-34-22641040564514/source follow=False _original_basename=mirror_info.sh.j2 checksum=3f92644b791816833989d215b9a84c589a7b8ebd backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:22 np0005478304 python3[2152]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:22 np0005478304 python3[2176]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478304 python3[2200]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478304 python3[2224]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478304 python3[2248]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478304 python3[2272]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:23 np0005478304 python3[2296]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478304 python3[2320]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478304 python3[2344]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478304 python3[2368]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478304 python3[2392]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:24 np0005478304 python3[2416]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478304 python3[2440]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478304 python3[2464]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478304 python3[2488]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478304 python3[2512]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:25 np0005478304 python3[2536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478304 python3[2560]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478304 python3[2584]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478304 python3[2608]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478304 python3[2632]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:26 np0005478304 python3[2656]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478304 python3[2680]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478304 python3[2704]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478304 python3[2728]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:27 np0005478304 python3[2752]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:01:30 np0005478304 python3[2778]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:01:30 np0005478304 systemd[1]: Starting Time & Date Service...
Oct  9 05:01:30 np0005478304 systemd[1]: Started Time & Date Service.
Oct  9 05:01:30 np0005478304 systemd-timedated[2780]: Changed time zone to 'UTC' (UTC).
Oct  9 05:01:30 np0005478304 python3[2809]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:30 np0005478304 python3[2885]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478304 python3[2956]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1760000490.6669607-255-42991187790197/source _original_basename=tmp6alvyrsa follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:31 np0005478304 python3[3056]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:31 np0005478304 python3[3127]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000491.262435-304-262006342713761/source _original_basename=tmp2rjhv2h7 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478304 python3[3229]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:32 np0005478304 python3[3302]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1760000492.1207168-384-14182724623146/source _original_basename=tmpxy9gktjv follow=False checksum=95f4e69a6ffdbd7852cf8f7369f79d5584359881 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:32 np0005478304 python3[3350]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478304 python3[3376]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:33 np0005478304 python3[3456]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:01:33 np0005478304 python3[3529]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000493.2993183-454-237092939005393/source _original_basename=tmp046oum8y follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:34 np0005478304 python3[3580]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e08-49e2-22a3-075b-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:01:34 np0005478304 python3[3608]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e08-49e2-22a3-075b-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  9 05:01:36 np0005478304 python3[3637]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:01:38 np0005478304 chronyd[756]: Selected source 23.150.41.122 (2.centos.pool.ntp.org)
Oct  9 05:01:51 np0005478304 python3[3663]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:02:00 np0005478304 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:02:51 np0005478304 systemd-logind[743]: Session 1 logged out. Waiting for processes to exit.
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: ROM [mem 0x00000000-0x0003ffff pref]
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]: assigned
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]: assigned
Oct  9 05:02:54 np0005478304 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]: assigned
Oct  9 05:02:54 np0005478304 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002)
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5652] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:02:54 np0005478304 systemd-udevd[3666]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5882] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5898] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5901] device (eth1): carrier: link connected
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5902] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5906] policy: auto-activating connection 'Wired connection 1' (f7132f8c-425a-3b32-8a6c-585b88f55c3f)
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5908] device (eth1): Activation: starting connection 'Wired connection 1' (f7132f8c-425a-3b32-8a6c-585b88f55c3f)
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5908] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5910] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5912] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:02:54 np0005478304 NetworkManager[811]: <info>  [1760000574.5915] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:02:55 np0005478304 systemd-logind[743]: New session 3 of user zuul.
Oct  9 05:02:55 np0005478304 systemd[1]: Started Session 3 of User zuul.
Oct  9 05:02:55 np0005478304 python3[3697]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e08-49e2-3fb7-b15f-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:03:04 np0005478304 systemd[1031]: Starting Mark boot as successful...
Oct  9 05:03:04 np0005478304 systemd[1031]: Finished Mark boot as successful.
Oct  9 05:03:05 np0005478304 python3[3778]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:03:05 np0005478304 python3[3851]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760000585.2058945-212-273319214126122/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=c2d0bcf7a5bf9fa847c68308129d6b0da345faee backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:03:05 np0005478304 python3[3901]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:03:06 np0005478304 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:03:06 np0005478304 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:03:06 np0005478304 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:03:06 np0005478304 systemd[1]: Stopping Network Manager...
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0059] caught SIGTERM, shutting down normally.
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0067] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0068] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0068] dhcp4 (eth0): state changed no lease
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0069] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0069] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0069] dhcp6 (eth0): state changed no lease
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0071] manager: NetworkManager state is now CONNECTING
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0141] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0141] dhcp4 (eth1): state changed no lease
Oct  9 05:03:06 np0005478304 NetworkManager[811]: <info>  [1760000586.0175] exiting (success)
Oct  9 05:03:06 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:03:06 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:03:06 np0005478304 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:03:06 np0005478304 systemd[1]: Stopped Network Manager.
Oct  9 05:03:06 np0005478304 systemd[1]: Starting Network Manager...
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.0521] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f4377bae-7107-4315-9822-dc318aaac0ab)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.0522] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.0562] manager[0x55fd1dce0090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:03:06 np0005478304 systemd[1]: Starting Hostname Service...
Oct  9 05:03:06 np0005478304 systemd[1]: Started Hostname Service.
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1171] hostname: hostname: using hostnamed
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1171] hostname: static hostname changed from (none) to "np0005478304"
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1173] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1176] manager[0x55fd1dce0090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1176] manager[0x55fd1dce0090]: rfkill: WWAN hardware radio set enabled
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1197] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1197] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1198] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1198] manager: Networking is enabled by state file
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1199] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1202] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1220] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1226] dhcp: init: Using DHCP client 'internal'
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1228] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1232] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1235] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1241] device (lo): Activation: starting connection 'lo' (c4b4942b-b288-4887-b67b-02123977123c)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1246] device (eth0): carrier: link connected
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1249] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1253] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1254] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1258] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1263] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1267] device (eth1): carrier: link connected
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1270] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1273] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (f7132f8c-425a-3b32-8a6c-585b88f55c3f) (indicated)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1273] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1277] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1281] device (eth1): Activation: starting connection 'Wired connection 1' (f7132f8c-425a-3b32-8a6c-585b88f55c3f)
Oct  9 05:03:06 np0005478304 systemd[1]: Started Network Manager.
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1297] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1300] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1301] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1302] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1303] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1305] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1306] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1307] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1308] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1312] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1313] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1315] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1317] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1321] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1324] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1340] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1344] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1364] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1366] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:03:06 np0005478304 NetworkManager[3910]: <info>  [1760000586.1369] device (lo): Activation: successful, device activated.
Oct  9 05:03:06 np0005478304 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:03:06 np0005478304 python3[3973]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e08-49e2-3fb7-b15f-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1471] dhcp6 (eth0): state changed new lease, address=2001:db8::372
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1478] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1506] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1507] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1509] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1512] device (eth0): Activation: successful, device activated.
Oct  9 05:03:07 np0005478304 NetworkManager[3910]: <info>  [1760000587.1515] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:03:17 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:03:36 np0005478304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4541] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:03:51 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:03:51 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4764] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4766] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4771] device (eth1): Activation: successful, device activated.
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4774] manager: startup complete
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4776] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <warn>  [1760000631.4779] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4784] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4851] dhcp4 (eth1): canceled DHCP transaction
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4851] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4851] dhcp4 (eth1): state changed no lease
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4860] policy: auto-activating connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4864] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4865] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4867] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4872] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4879] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4904] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4905] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:03:51 np0005478304 NetworkManager[3910]: <info>  [1760000631.4910] device (eth1): Activation: successful, device activated.
Oct  9 05:03:55 np0005478304 python3[4097]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:03:56 np0005478304 python3[4170]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760000635.6693401-379-190813047048495/source _original_basename=tmp_fxvgif5 follow=False checksum=26ebf755fae5a80bfc5f098245c8908b029e5df9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:03:57 np0005478304 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 05:03:57 np0005478304 systemd[1]: session-3.scope: Consumed 1.582s CPU time.
Oct  9 05:03:57 np0005478304 systemd-logind[743]: Session 3 logged out. Waiting for processes to exit.
Oct  9 05:03:57 np0005478304 systemd-logind[743]: Removed session 3.
Oct  9 05:04:01 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:04:52 np0005478304 chronyd[756]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Oct  9 05:06:04 np0005478304 systemd[1031]: Created slice User Background Tasks Slice.
Oct  9 05:06:04 np0005478304 systemd[1031]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 05:06:04 np0005478304 systemd[1031]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 05:08:54 np0005478304 systemd-logind[743]: New session 4 of user zuul.
Oct  9 05:08:54 np0005478304 systemd[1]: Started Session 4 of User zuul.
Oct  9 05:08:54 np0005478304 python3[4229]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001cfc-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:54 np0005478304 python3[4258]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478304 python3[4284]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478304 python3[4310]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:55 np0005478304 python3[4336]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478304 python3[4362]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:08:56 np0005478304 python3[4362]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  9 05:08:56 np0005478304 python3[4388]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 05:08:56 np0005478304 systemd[1]: Reloading.
Oct  9 05:08:56 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:08:58 np0005478304 python3[4444]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  9 05:08:58 np0005478304 python3[4470]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478304 python3[4498]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:58 np0005478304 python3[4526]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478304 python3[4554]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:08:59 np0005478304 python3[4581]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e08-49e2-2dac-3627-000000001d02-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:09:00 np0005478304 python3[4611]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:09:02 np0005478304 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 05:09:02 np0005478304 systemd[1]: session-4.scope: Consumed 2.452s CPU time.
Oct  9 05:09:02 np0005478304 systemd-logind[743]: Session 4 logged out. Waiting for processes to exit.
Oct  9 05:09:02 np0005478304 systemd-logind[743]: Removed session 4.
Oct  9 05:09:04 np0005478304 systemd-logind[743]: New session 5 of user zuul.
Oct  9 05:09:04 np0005478304 systemd[1]: Started Session 5 of User zuul.
Oct  9 05:09:04 np0005478304 python3[4646]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 05:09:50 np0005478304 kernel: SELinux:  Converting 367 SID table entries...
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:50 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  Converting 367 SID table entries...
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:09:56 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  Converting 367 SID table entries...
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:10:03 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:04 np0005478304 setsebool[4735]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  9 05:10:04 np0005478304 setsebool[4735]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  9 05:10:12 np0005478304 kernel: SELinux:  Converting 370 SID table entries...
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:10:12 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:10:24 np0005478304 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:24 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:10:24 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:10:24 np0005478304 systemd[1]: Reloading.
Oct  9 05:10:24 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:10:25 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:10:25 np0005478304 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:10:25 np0005478304 systemd[1]: Starting Authorization Manager...
Oct  9 05:10:25 np0005478304 polkitd[6802]: Started polkitd version 0.117
Oct  9 05:10:25 np0005478304 systemd[1]: Started Authorization Manager.
Oct  9 05:10:25 np0005478304 systemd[1]: Started PackageKit Daemon.
Oct  9 05:10:26 np0005478304 python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e08-49e2-9746-57c4-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:10:27 np0005478304 kernel: evm: overlay not supported
Oct  9 05:10:27 np0005478304 systemd[1031]: Starting D-Bus User Message Bus...
Oct  9 05:10:27 np0005478304 dbus-broker-launch[9259]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478304 dbus-broker-launch[9259]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:10:27 np0005478304 systemd[1031]: Started D-Bus User Message Bus.
Oct  9 05:10:27 np0005478304 dbus-broker-lau[9259]: Ready
Oct  9 05:10:27 np0005478304 systemd[1031]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 05:10:27 np0005478304 systemd[1031]: Created slice Slice /user.
Oct  9 05:10:27 np0005478304 systemd[1031]: podman-9162.scope: unit configures an IP firewall, but not running as root.
Oct  9 05:10:27 np0005478304 systemd[1031]: (This warning is only shown for the first unit using IP firewalling.)
Oct  9 05:10:27 np0005478304 systemd[1031]: Started podman-9162.scope.
Oct  9 05:10:27 np0005478304 systemd[1031]: Started podman-pause-743ddac1.scope.
Oct  9 05:10:28 np0005478304 systemd[1]: session-5.scope: Deactivated successfully.
Oct  9 05:10:28 np0005478304 systemd[1]: session-5.scope: Consumed 50.965s CPU time.
Oct  9 05:10:28 np0005478304 systemd-logind[743]: Session 5 logged out. Waiting for processes to exit.
Oct  9 05:10:28 np0005478304 systemd-logind[743]: Removed session 5.
Oct  9 05:10:48 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:10:48 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:10:48 np0005478304 systemd[1]: man-db-cache-update.service: Consumed 28.684s CPU time.
Oct  9 05:10:48 np0005478304 systemd[1]: run-rf592e4cb76fd4d1b827cf36f706fff23.service: Deactivated successfully.
Oct  9 05:10:50 np0005478304 systemd-logind[743]: New session 6 of user zuul.
Oct  9 05:10:50 np0005478304 systemd[1]: Started Session 6 of User zuul.
Oct  9 05:10:50 np0005478304 python3[26214]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:51 np0005478304 python3[26240]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:51 np0005478304 python3[26266]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005478304 update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  9 05:10:51 np0005478304 python3[26300]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFxh/nv6sQLW1yzvGqXNfnJZOZRxYC8qJcgS1V4mG6Ez91eTuQ+QeRIx7PiC27aRMgFhv+XrMbKb0XUoGYd1TGk= zuul@np0005478301#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  9 05:10:52 np0005478304 python3[26378]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:10:52 np0005478304 python3[26451]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001052.1061735-155-48100563814734/source _original_basename=tmpoo7i5186 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:10:53 np0005478304 python3[26501]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Oct  9 05:10:53 np0005478304 systemd[1]: Starting Hostname Service...
Oct  9 05:10:53 np0005478304 systemd[1]: Started Hostname Service.
Oct  9 05:10:53 np0005478304 systemd-hostnamed[26505]: Changed pretty hostname to 'compute-2'
Oct  9 05:10:53 np0005478304 systemd-hostnamed[26505]: Hostname set to <compute-2> (static)
Oct  9 05:10:53 np0005478304 NetworkManager[3910]: <info>  [1760001053.3715] hostname: static hostname changed from "np0005478304" to "compute-2"
Oct  9 05:10:53 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:10:53 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:10:54 np0005478304 systemd-logind[743]: Session 6 logged out. Waiting for processes to exit.
Oct  9 05:10:54 np0005478304 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 05:10:54 np0005478304 systemd[1]: session-6.scope: Consumed 1.706s CPU time.
Oct  9 05:10:54 np0005478304 systemd-logind[743]: Removed session 6.
Oct  9 05:11:03 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:11:23 np0005478304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:14:04 np0005478304 systemd-logind[743]: New session 7 of user zuul.
Oct  9 05:14:04 np0005478304 systemd[1]: Started Session 7 of User zuul.
Oct  9 05:14:05 np0005478304 python3[26600]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:14:06 np0005478304 python3[26712]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478304 python3[26785]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=delorean.repo follow=False checksum=e6ffbe2bc1ecfd38ca5198d3750b43ac3a0e1ed6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478304 python3[26811]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478304 python3[26884]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=717d1fa230cffa8c08764d71bd0b4a50d3a90cae backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:07 np0005478304 python3[26910]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:07 np0005478304 python3[26983]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=8163d09913b97597f86e38eb45c3003e91da783e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478304 python3[27009]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478304 python3[27082]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=d108d0750ad5b288ccc41bc6534ea307cc51e987 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478304 python3[27108]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:08 np0005478304 python3[27181]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=20c3917c672c059a872cf09a437f61890d2f89fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:08 np0005478304 python3[27207]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478304 python3[27280]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=4d14f168e8a0e6930d905faffbcdf4fedd6664d0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:09 np0005478304 python3[27306]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 05:14:09 np0005478304 python3[27379]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1760001246.4993014-30890-153937197119772/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:14:18 np0005478304 python3[27427]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:15:31 np0005478304 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 05:15:31 np0005478304 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 05:15:31 np0005478304 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 05:15:31 np0005478304 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 05:15:31 np0005478304 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 05:19:18 np0005478304 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 05:19:18 np0005478304 systemd[1]: session-7.scope: Consumed 3.445s CPU time.
Oct  9 05:19:18 np0005478304 systemd-logind[743]: Session 7 logged out. Waiting for processes to exit.
Oct  9 05:19:18 np0005478304 systemd-logind[743]: Removed session 7.
Oct  9 05:24:26 np0005478304 systemd-logind[743]: New session 8 of user zuul.
Oct  9 05:24:26 np0005478304 systemd[1]: Started Session 8 of User zuul.
Oct  9 05:24:26 np0005478304 python3.9[27587]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:27 np0005478304 python3.9[27768]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:24:36 np0005478304 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 05:24:36 np0005478304 systemd[1]: session-8.scope: Consumed 6.161s CPU time.
Oct  9 05:24:36 np0005478304 systemd-logind[743]: Session 8 logged out. Waiting for processes to exit.
Oct  9 05:24:36 np0005478304 systemd-logind[743]: Removed session 8.
Oct  9 05:24:37 np0005478304 irqbalance[738]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 05:24:37 np0005478304 irqbalance[738]: IRQ 43 affinity is now unmanaged
Oct  9 05:24:51 np0005478304 systemd-logind[743]: New session 9 of user zuul.
Oct  9 05:24:51 np0005478304 systemd[1]: Started Session 9 of User zuul.
Oct  9 05:24:51 np0005478304 python3.9[27978]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 05:24:52 np0005478304 python3.9[28152]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:53 np0005478304 python3.9[28304]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:24:53 np0005478304 python3.9[28457]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:24:54 np0005478304 python3.9[28609]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:55 np0005478304 python3.9[28761]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:24:55 np0005478304 python3.9[28884]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760001894.848628-179-135944304727514/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:24:56 np0005478304 python3.9[29036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:24:56 np0005478304 python3.9[29192]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:24:57 np0005478304 python3.9[29342]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:24:59 np0005478304 python3.9[29597]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:25:00 np0005478304 python3.9[29747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:01 np0005478304 python3.9[29901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478304 python3.9[30059]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:25:02 np0005478304 python3.9[30143]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:26:18 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:18 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:18 np0005478304 dbus-broker-launch[9259]: Noticed file-system modification, trigger reload.
Oct  9 05:26:18 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:18 np0005478304 dbus-broker-launch[9259]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  9 05:26:18 np0005478304 dbus-broker-launch[9259]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  9 05:26:18 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:18 np0005478304 systemd[1]: Reexecuting.
Oct  9 05:26:18 np0005478304 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 05:26:18 np0005478304 systemd: Detected virtualization kvm.
Oct  9 05:26:18 np0005478304 systemd: Detected architecture x86-64.
Oct  9 05:26:18 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:18 np0005478304 systemd[1]: Reloading.
Oct  9 05:26:18 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:18 np0005478304 systemd[1]: Starting dnf makecache...
Oct  9 05:26:18 np0005478304 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  9 05:26:18 np0005478304 dnf[30423]: Failed determining last makecache time.
Oct  9 05:26:18 np0005478304 dnf[30423]: delorean-openstack-barbican-42b4c41831408a8e323  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-openstack-cinder-1c00d6490d88e436f26ef  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 systemd[1]: Reloading.
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-python-stevedore-c4acc5639fd2329372142  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:19 np0005478304 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 05:26:19 np0005478304 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-python-cloudkitty-tests-tempest-3961dc  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 systemd[1]: Reloading.
Oct  9 05:26:19 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:26:19 np0005478304 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-diskimage-builder-43381184423c185801b5  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:19 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:19 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:19 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:26:19 np0005478304 dnf[30423]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-python-designate-tests-tempest-347fdbc  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-openstack-glance-1fd12c29b339f30fe823e  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-openstack-keystone-e4b40af0ae3698fbbbb  18 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-openstack-manila-3c01b7181572c95dac462  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-python-vmware-nsxlib-458234972d1428ac9  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-openstack-octavia-ba397f07a7331190208c  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:20 np0005478304 dnf[30423]: delorean-openstack-watcher-c014f81a8647287f6dcc  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:21 np0005478304 dnf[30423]: delorean-edpm-image-builder-55ba53cf215b14ed95b  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:21 np0005478304 dnf[30423]: delorean-puppet-ceph-b0c245ccde541a63fde0564366  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:21 np0005478304 dnf[30423]: delorean-openstack-swift-dc98a8463506ac520c469a  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:21 np0005478304 dnf[30423]: delorean-python-tempestconf-8515371b7cceebd4282  19 kB/s | 3.0 kB     00:00
Oct  9 05:26:21 np0005478304 dnf[30423]: delorean-openstack-heat-ui-013accbfd179753bc3f0  20 kB/s | 3.0 kB     00:00
Oct  9 05:26:23 np0005478304 dnf[30423]: CentOS Stream 9 - BaseOS                        4.3 kB/s | 6.1 kB     00:01
Oct  9 05:26:23 np0005478304 dnf[30423]: CentOS Stream 9 - AppStream                      16 kB/s | 6.5 kB     00:00
Oct  9 05:26:24 np0005478304 dnf[30423]: CentOS Stream 9 - CRB                            18 kB/s | 6.0 kB     00:00
Oct  9 05:26:24 np0005478304 dnf[30423]: CentOS Stream 9 - Extras packages                17 kB/s | 8.0 kB     00:00
Oct  9 05:26:24 np0005478304 dnf[30423]: dlrn-antelope-testing                            20 kB/s | 3.0 kB     00:00
Oct  9 05:26:24 np0005478304 dnf[30423]: dlrn-antelope-build-deps                         20 kB/s | 3.0 kB     00:00
Oct  9 05:26:26 np0005478304 dnf[30423]: centos9-rabbitmq                                1.6 kB/s | 3.0 kB     00:01
Oct  9 05:26:27 np0005478304 dnf[30423]: centos9-storage                                 7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:27 np0005478304 dnf[30423]: centos9-opstools                                7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:28 np0005478304 dnf[30423]: NFV SIG OpenvSwitch                             7.1 kB/s | 3.0 kB     00:00
Oct  9 05:26:28 np0005478304 dnf[30423]: repo-setup-centos-appstream                      10 kB/s | 4.4 kB     00:00
Oct  9 05:26:29 np0005478304 dnf[30423]: repo-setup-centos-baseos                        9.2 kB/s | 3.9 kB     00:00
Oct  9 05:26:29 np0005478304 dnf[30423]: repo-setup-centos-highavailability              9.0 kB/s | 3.9 kB     00:00
Oct  9 05:26:30 np0005478304 dnf[30423]: repo-setup-centos-powertools                    3.5 kB/s | 4.3 kB     00:01
Oct  9 05:26:31 np0005478304 dnf[30423]: Extra Packages for Enterprise Linux 9 - x86_64   78 kB/s |  30 kB     00:00
Oct  9 05:26:31 np0005478304 dnf[30423]: Metadata cache created.
Oct  9 05:26:31 np0005478304 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 05:26:31 np0005478304 systemd[1]: Finished dnf makecache.
Oct  9 05:26:31 np0005478304 systemd[1]: dnf-makecache.service: Consumed 1.264s CPU time.
Oct  9 05:27:04 np0005478304 kernel: SELinux:  Converting 2716 SID table entries...
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:27:04 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:27:04 np0005478304 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  9 05:27:04 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:04 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:04 np0005478304 systemd[1]: Reloading.
Oct  9 05:27:04 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:04 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:04 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:04 np0005478304 systemd: Stopping Journal Service...
Oct  9 05:27:04 np0005478304 systemd: Stopping Rule-based Manager for Device Events and Files...
Oct  9 05:27:04 np0005478304 systemd-journald[650]: Received SIGTERM from PID 1 (systemd).
Oct  9 05:27:04 np0005478304 systemd-journald[650]: Journal stopped
Oct  9 05:27:04 np0005478304 systemd: systemd-journald.service: Deactivated successfully.
Oct  9 05:27:04 np0005478304 systemd: Stopped Journal Service.
Oct  9 05:27:04 np0005478304 systemd: Starting Journal Service...
Oct  9 05:27:04 np0005478304 systemd: systemd-udevd.service: Deactivated successfully.
Oct  9 05:27:04 np0005478304 systemd: Stopped Rule-based Manager for Device Events and Files.
Oct  9 05:27:04 np0005478304 systemd: systemd-udevd.service: Consumed 1.401s CPU time.
Oct  9 05:27:04 np0005478304 systemd: Starting Rule-based Manager for Device Events and Files...
Oct  9 05:27:04 np0005478304 systemd-journald[30905]: Journal started
Oct  9 05:27:04 np0005478304 systemd-journald[30905]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 05:27:04 np0005478304 systemd: Started Journal Service.
Oct  9 05:27:04 np0005478304 systemd-udevd[30916]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 05:27:04 np0005478304 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 05:27:04 np0005478304 systemd[1]: Reloading.
Oct  9 05:27:04 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:04 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:05 np0005478304 systemd[1]: Starting PackageKit Daemon...
Oct  9 05:27:05 np0005478304 systemd[1]: Started PackageKit Daemon.
Oct  9 05:27:08 np0005478304 python3.9[36936]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:09 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:09 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:09 np0005478304 systemd[1]: man-db-cache-update.service: Consumed 6.143s CPU time.
Oct  9 05:27:09 np0005478304 systemd[1]: run-r9e0cbb3831364c6ea85a302cd5bea0eb.service: Deactivated successfully.
Oct  9 05:27:09 np0005478304 systemd[1]: run-r2bd799839f554ddbab96b54278f089df.service: Deactivated successfully.
Oct  9 05:27:09 np0005478304 python3.9[38964]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 05:27:10 np0005478304 python3.9[39116]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 05:27:12 np0005478304 python3.9[39269]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:12 np0005478304 python3.9[39421]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  9 05:27:14 np0005478304 python3.9[39573]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:14 np0005478304 python3.9[39725]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:14 np0005478304 python3.9[39848]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002034.187947-642-250235853355983/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:27:16 np0005478304 python3.9[40000]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 05:27:19 np0005478304 python3.9[40153]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:19 np0005478304 python3.9[40311]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:19 np0005478304 rsyslogd[961]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 05:27:20 np0005478304 python3.9[40472]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 05:27:20 np0005478304 python3.9[40625]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:21 np0005478304 python3.9[40783]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 05:27:22 np0005478304 python3.9[40935]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:23 np0005478304 python3.9[41088]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:24 np0005478304 python3.9[41240]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:24 np0005478304 python3.9[41363]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002043.8496084-927-234443019386778/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:25 np0005478304 python3.9[41515]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:25 np0005478304 systemd[1]: Starting Load Kernel Modules...
Oct  9 05:27:25 np0005478304 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 05:27:25 np0005478304 systemd-modules-load[41519]: Inserted module 'br_netfilter'
Oct  9 05:27:25 np0005478304 kernel: Bridge firewalling registered
Oct  9 05:27:25 np0005478304 systemd[1]: Finished Load Kernel Modules.
Oct  9 05:27:25 np0005478304 python3.9[41674]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:27:26 np0005478304 python3.9[41797]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002045.5673218-996-238138128355400/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:27:26 np0005478304 python3.9[41949]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:27:32 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:27:32 np0005478304 dbus-broker-launch[711]: Noticed file-system modification, trigger reload.
Oct  9 05:27:32 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:27:32 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:27:32 np0005478304 systemd[1]: Reloading.
Oct  9 05:27:32 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:32 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:27:33 np0005478304 python3.9[43446]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:34 np0005478304 python3.9[44771]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 05:27:35 np0005478304 python3.9[45660]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:27:35 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:27:35 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:27:35 np0005478304 systemd[1]: man-db-cache-update.service: Consumed 2.924s CPU time.
Oct  9 05:27:35 np0005478304 systemd[1]: run-rb39578d4143a4fa7bedd8f73265e6084.service: Deactivated successfully.
Oct  9 05:27:35 np0005478304 python3.9[45962]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:35 np0005478304 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478304 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478304 python3.9[46335]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:36 np0005478304 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478304 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 05:27:36 np0005478304 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 05:27:36 np0005478304 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 05:27:36 np0005478304 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 05:27:37 np0005478304 python3.9[46497]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 05:27:40 np0005478304 python3.9[46649]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478304 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:40 np0005478304 python3.9[46839]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:27:40 np0005478304 systemd[1]: Reloading.
Oct  9 05:27:40 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:27:41 np0005478304 python3.9[47028]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:41 np0005478304 python3.9[47181]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:42 np0005478304 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 05:27:42 np0005478304 python3.9[47334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:44 np0005478304 python3.9[47496]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:27:44 np0005478304 python3.9[47649]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:27:44 np0005478304 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 05:27:44 np0005478304 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 05:27:44 np0005478304 systemd[1]: Stopping Apply Kernel Variables...
Oct  9 05:27:44 np0005478304 systemd[1]: Starting Apply Kernel Variables...
Oct  9 05:27:44 np0005478304 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  9 05:27:44 np0005478304 systemd[1]: Finished Apply Kernel Variables.
Oct  9 05:27:45 np0005478304 systemd-logind[743]: Session 9 logged out. Waiting for processes to exit.
Oct  9 05:27:45 np0005478304 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 05:27:45 np0005478304 systemd[1]: session-9.scope: Consumed 1min 38.582s CPU time.
Oct  9 05:27:45 np0005478304 systemd-logind[743]: Removed session 9.
Oct  9 05:27:50 np0005478304 systemd-logind[743]: New session 10 of user zuul.
Oct  9 05:27:50 np0005478304 systemd[1]: Started Session 10 of User zuul.
Oct  9 05:27:51 np0005478304 python3.9[47833]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:27:52 np0005478304 python3.9[47989]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 05:27:52 np0005478304 python3.9[48142]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 05:27:53 np0005478304 python3.9[48300]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 05:27:54 np0005478304 python3.9[48460]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:27:54 np0005478304 python3.9[48544]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:28:04 np0005478304 python3.9[48709]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:12 np0005478304 kernel: SELinux:  Converting 2726 SID table entries...
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:12 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:13 np0005478304 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  9 05:28:13 np0005478304 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 05:28:13 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:13 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:13 np0005478304 systemd[1]: Reloading.
Oct  9 05:28:13 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:13 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:14 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:14 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:14 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:14 np0005478304 systemd[1]: run-ra7a053c84e0e4aa3961c53354a190a28.service: Deactivated successfully.
Oct  9 05:28:15 np0005478304 python3.9[49811]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 05:28:15 np0005478304 systemd[1]: Reloading.
Oct  9 05:28:15 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:15 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:15 np0005478304 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 05:28:15 np0005478304 chown[49853]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 05:28:15 np0005478304 ovs-ctl[49858]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  9 05:28:15 np0005478304 ovs-ctl[49858]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  9 05:28:15 np0005478304 ovs-ctl[49858]: Starting ovsdb-server [  OK  ]
Oct  9 05:28:15 np0005478304 ovs-vsctl[49907]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 05:28:15 np0005478304 ovs-vsctl[49927]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c24becb7-a313-4586-a73e-1530a4367da3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 05:28:15 np0005478304 ovs-ctl[49858]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 05:28:15 np0005478304 ovs-ctl[49858]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478304 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 05:28:15 np0005478304 ovs-vsctl[49933]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  9 05:28:15 np0005478304 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 05:28:15 np0005478304 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 05:28:15 np0005478304 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 05:28:15 np0005478304 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 05:28:15 np0005478304 ovs-ctl[49977]: Inserting openvswitch module [  OK  ]
Oct  9 05:28:15 np0005478304 ovs-ctl[49946]: Starting ovs-vswitchd [  OK  ]
Oct  9 05:28:15 np0005478304 ovs-ctl[49946]: Enabling remote OVSDB managers [  OK  ]
Oct  9 05:28:15 np0005478304 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 05:28:15 np0005478304 ovs-vsctl[49995]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  9 05:28:15 np0005478304 systemd[1]: Starting Open vSwitch...
Oct  9 05:28:15 np0005478304 systemd[1]: Finished Open vSwitch.
Oct  9 05:28:16 np0005478304 python3.9[50146]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:17 np0005478304 python3.9[50298]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  9 05:28:18 np0005478304 kernel: SELinux:  Converting 2740 SID table entries...
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability open_perms=1
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability always_check_network=0
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 05:28:18 np0005478304 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 05:28:18 np0005478304 python3.9[50453]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:28:19 np0005478304 dbus-broker-launch[732]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  9 05:28:19 np0005478304 python3.9[50611]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:20 np0005478304 python3.9[50764]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:28:22 np0005478304 python3.9[51051]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 05:28:22 np0005478304 python3.9[51201]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:23 np0005478304 python3.9[51355]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:25 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:25 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:25 np0005478304 systemd[1]: Reloading.
Oct  9 05:28:25 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:25 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:25 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:25 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:25 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:25 np0005478304 systemd[1]: run-r92eaa97a310244b886de8e56e2a049f5.service: Deactivated successfully.
Oct  9 05:28:26 np0005478304 python3.9[51672]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:27 np0005478304 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  9 05:28:27 np0005478304 systemd[1]: Stopped Network Manager Wait Online.
Oct  9 05:28:27 np0005478304 systemd[1]: Stopping Network Manager Wait Online...
Oct  9 05:28:27 np0005478304 systemd[1]: Stopping Network Manager...
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0098] caught SIGTERM, shutting down normally.
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0108] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0108] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0108] dhcp4 (eth0): state changed no lease
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0109] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0109] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0109] dhcp6 (eth0): state changed no lease
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0111] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:27 np0005478304 NetworkManager[3910]: <info>  [1760002107.0151] exiting (success)
Oct  9 05:28:27 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:27 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:27 np0005478304 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  9 05:28:27 np0005478304 systemd[1]: Stopped Network Manager.
Oct  9 05:28:27 np0005478304 systemd[1]: Starting Network Manager...
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.0581] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:f4377bae-7107-4315-9822-dc318aaac0ab)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.0582] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.0622] manager[0x55f1d6e82090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 05:28:27 np0005478304 systemd[1]: Starting Hostname Service...
Oct  9 05:28:27 np0005478304 systemd[1]: Started Hostname Service.
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1184] hostname: hostname: using hostnamed
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1185] hostname: static hostname changed from (none) to "compute-2"
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1187] dns-mgr: init: dns=none,systemd-resolved rc-manager=unmanaged
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1190] manager[0x55f1d6e82090]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1190] manager[0x55f1d6e82090]: rfkill: WWAN hardware radio set enabled
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1207] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1214] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1216] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1217] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1217] manager: Networking is enabled by state file
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1219] settings: Loaded settings plugin: keyfile (internal)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1223] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1242] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1249] dhcp: init: Using DHCP client 'internal'
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1251] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1255] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1259] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1264] device (lo): Activation: starting connection 'lo' (c4b4942b-b288-4887-b67b-02123977123c)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1269] device (eth0): carrier: link connected
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1272] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1275] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1276] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1280] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1285] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1289] device (eth1): carrier: link connected
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1292] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1296] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98) (indicated)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1296] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1300] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1304] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 05:28:27 np0005478304 systemd[1]: Started Network Manager.
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1336] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1341] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1343] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1344] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1345] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1347] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1348] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1350] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1351] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1355] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1357] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1359] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1365] policy: set 'System eth0' (eth0) as default for IPv6 routing and DNS
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1367] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1373] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:27 np0005478304 systemd[1]: Starting Network Manager Wait Online...
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1394] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1404] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1471] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1474] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1476] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1482] device (lo): Activation: successful, device activated.
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1488] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1493] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  9 05:28:27 np0005478304 NetworkManager[51682]: <info>  [1760002107.1496] device (eth1): Activation: successful, device activated.
Oct  9 05:28:27 np0005478304 python3.9[51881]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.1979] dhcp6 (eth0): state changed new lease, address=2001:db8::372
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.1989] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2019] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2021] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2023] manager: NetworkManager state is now CONNECTED_SITE
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2031] device (eth0): Activation: successful, device activated.
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2035] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 05:28:28 np0005478304 NetworkManager[51682]: <info>  [1760002108.2037] manager: startup complete
Oct  9 05:28:28 np0005478304 systemd[1]: Finished Network Manager Wait Online.
Oct  9 05:28:34 np0005478304 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 05:28:34 np0005478304 systemd[1]: Starting man-db-cache-update.service...
Oct  9 05:28:34 np0005478304 systemd[1]: Reloading.
Oct  9 05:28:34 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:28:34 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:28:34 np0005478304 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 05:28:34 np0005478304 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 05:28:34 np0005478304 systemd[1]: Finished man-db-cache-update.service.
Oct  9 05:28:34 np0005478304 systemd[1]: run-r8c760d284d8742dcb5753618f54da26c.service: Deactivated successfully.
Oct  9 05:28:36 np0005478304 python3.9[52364]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:28:37 np0005478304 python3.9[52516]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:37 np0005478304 python3.9[52670]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478304 python3.9[52822]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:38 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:28:38 np0005478304 python3.9[52976]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:39 np0005478304 python3.9[53128]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:39 np0005478304 python3.9[53280]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:40 np0005478304 python3.9[53403]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002119.2627075-649-153729194602762/.source _original_basename=.k0rfzzzh follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:40 np0005478304 python3.9[53555]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:41 np0005478304 python3.9[53707]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  9 05:28:41 np0005478304 python3.9[53859]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:43 np0005478304 python3.9[54286]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  9 05:28:44 np0005478304 ansible-async_wrapper.py[54461]: Invoked with j795977429977 300 /home/zuul/.ansible/tmp/ansible-tmp-1760002123.5857875-847-192465475091583/AnsiballZ_edpm_os_net_config.py _
Oct  9 05:28:44 np0005478304 ansible-async_wrapper.py[54464]: Starting module and watcher
Oct  9 05:28:44 np0005478304 ansible-async_wrapper.py[54464]: Start watching 54465 (300)
Oct  9 05:28:44 np0005478304 ansible-async_wrapper.py[54465]: Start module (54465)
Oct  9 05:28:44 np0005478304 ansible-async_wrapper.py[54461]: Return async_wrapper task started.
Oct  9 05:28:44 np0005478304 python3.9[54466]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  9 05:28:44 np0005478304 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 05:28:44 np0005478304 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 05:28:44 np0005478304 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 05:28:44 np0005478304 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 05:28:44 np0005478304 kernel: cfg80211: failed to load regulatory.db
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7205] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7223] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7622] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7624] audit: op="connection-add" uuid="e74d22ec-f198-4e0f-be1a-1f80d32c41d9" name="br-ex-br" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7635] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7636] audit: op="connection-add" uuid="8ffce742-fb38-47d1-9133-f81b3e7c1d96" name="br-ex-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7646] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7647] audit: op="connection-add" uuid="ea9c5477-1c68-4281-8c51-5d80fd4aa6e4" name="eth1-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7657] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7658] audit: op="connection-add" uuid="13e69b35-1d02-48f3-8023-3685cdfc9a88" name="vlan20-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7667] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7668] audit: op="connection-add" uuid="084745e1-4043-483a-ab5e-f09ef7745634" name="vlan21-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7678] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7679] audit: op="connection-add" uuid="945df9e4-941f-40b7-8466-cc015b98ce41" name="vlan22-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7689] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7691] audit: op="connection-add" uuid="406aaad8-f358-4cbe-959e-1924644e4828" name="vlan23-port" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7708] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.routes,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7721] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7723] audit: op="connection-add" uuid="e5a1a2e1-7841-4bc2-871e-a31627de4c3f" name="br-ex-if" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7745] audit: op="connection-update" uuid="14fca061-f236-5fd4-a05f-8577fd3a8a98" name="ci-private-network" args="connection.port-type,connection.master,connection.slave-type,connection.controller,connection.timestamp,ovs-interface.type,ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.addr-gen-mode,ipv6.method,ipv6.dns,ovs-external-ids.data,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.method,ipv4.never-default,ipv4.dns" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7758] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7760] audit: op="connection-add" uuid="6a31aae5-a24d-49f8-9056-2c0284cb05d9" name="vlan20-if" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7772] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7773] audit: op="connection-add" uuid="e10720fe-4a3a-42fc-9eab-40565799ce5b" name="vlan21-if" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7787] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7788] audit: op="connection-add" uuid="508ad833-78ea-45c2-a626-2e5f91da07cd" name="vlan22-if" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7801] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7802] audit: op="connection-add" uuid="0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795" name="vlan23-if" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7811] audit: op="connection-delete" uuid="f7132f8c-425a-3b32-8a6c-585b88f55c3f" name="Wired connection 1" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7820] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7827] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7830] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e74d22ec-f198-4e0f-be1a-1f80d32c41d9)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7831] audit: op="connection-activate" uuid="e74d22ec-f198-4e0f-be1a-1f80d32c41d9" name="br-ex-br" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7832] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7837] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7841] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7842] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7846] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7849] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7850] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7854] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7857] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7859] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7863] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7866] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7868] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7873] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7875] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7877] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7881] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7886] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7887] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7888] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7889] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7894] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7897] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7901] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e5a1a2e1-7841-4bc2-871e-a31627de4c3f)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7901] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7903] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7904] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7905] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7906] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7920] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7921] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7923] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7924] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7924] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7926] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7929] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7933] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7933] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7935] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7936] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7938] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7939] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7942] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7945] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7946] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7948] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7949] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7949] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7951] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7955] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7958] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (508ad833-78ea-45c2-a626-2e5f91da07cd)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7958] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7960] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7961] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7962] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7964] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7967] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7970] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7971] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7973] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7975] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7976] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7977] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7989] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.routes,ipv6.may-fail,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7990] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7993] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.7994] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8003] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8007] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: ovs-system: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8021] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8024] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8026] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8029] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: Timeout policy base is empty
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8032] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8033] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8034] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 systemd-udevd[54472]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8037] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8039] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8041] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8042] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8045] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8047] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8049] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8056] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8059] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8062] dhcp4 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8062] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8062] dhcp4 (eth0): state changed no lease
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8085] dhcp6 (eth0): canceled DHCP transaction
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8095] dhcp6 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8096] dhcp6 (eth0): state changed no lease
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8100] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8112] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8115] audit: op="device-reapply" interface="eth1" ifindex=3 pid=54467 uid=0 result="fail" reason="Device is not activated"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8120] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8126] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8148] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8154] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8157] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8189] device (eth1): disconnecting for new activation request.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8189] audit: op="connection-activate" uuid="14fca061-f236-5fd4-a05f-8577fd3a8a98" name="ci-private-network" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8193] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: br-ex: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8246] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8262] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8264] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8270] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54467 uid=0 result="success"
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8271] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8280] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8281] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8282] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8282] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8283] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8284] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8286] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8291] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8293] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8296] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8298] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8305] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8308] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8310] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8313] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8315] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: vlan22: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8324] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 systemd-udevd[54473]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8333] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8338] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8341] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8344] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8349] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8353] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8359] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8383] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: vlan21: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8437] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8440] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8443] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8450] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8456] device (eth1): Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8467] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: vlan20: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8485] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8502] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8534] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478304 kernel: vlan23: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8553] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8561] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8564] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8569] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8633] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8633] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8636] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8639] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8645] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8664] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8668] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8676] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8676] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8681] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8687] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8688] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 05:28:45 np0005478304 NetworkManager[51682]: <info>  [1760002125.8692] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 05:28:46 np0005478304 NetworkManager[51682]: <info>  [1760002126.9816] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.1053] checkpoint[0x55f1d6e5a950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.1055] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.2319] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.2333] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.4115] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.5346] checkpoint[0x55f1d6e5aa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.5349] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.7705] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.7719] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 python3.9[54821]: ansible-ansible.legacy.async_status Invoked with jid=j795977429977.54461 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.9373] audit: op="networking-control" arg="global-dns-configuration" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.9386] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf)
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.9393] audit: op="networking-control" arg="global-dns-configuration" pid=54467 uid=0 result="success"
Oct  9 05:28:47 np0005478304 NetworkManager[51682]: <info>  [1760002127.9412] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54467 uid=0 result="success"
Oct  9 05:28:48 np0005478304 NetworkManager[51682]: <info>  [1760002128.0619] checkpoint[0x55f1d6e5aaf0]: destroy /org/freedesktop/NetworkManager/Checkpoint/3
Oct  9 05:28:48 np0005478304 NetworkManager[51682]: <info>  [1760002128.0624] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/3" pid=54467 uid=0 result="success"
Oct  9 05:28:48 np0005478304 ansible-async_wrapper.py[54465]: Module complete (54465)
Oct  9 05:28:49 np0005478304 ansible-async_wrapper.py[54464]: Done in kid B.
Oct  9 05:28:51 np0005478304 python3.9[54926]: ansible-ansible.legacy.async_status Invoked with jid=j795977429977.54461 mode=status _async_dir=/root/.ansible_async
Oct  9 05:28:51 np0005478304 python3.9[55026]: ansible-ansible.legacy.async_status Invoked with jid=j795977429977.54461 mode=cleanup _async_dir=/root/.ansible_async
Oct  9 05:28:52 np0005478304 python3.9[55178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:52 np0005478304 python3.9[55301]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002131.741668-928-267023380112279/.source.returncode _original_basename=.dgq7a_qk follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:52 np0005478304 python3.9[55453]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:28:53 np0005478304 python3.9[55576]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002132.6573584-976-126987722013265/.source.cfg _original_basename=.7icfy7yk follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:28:53 np0005478304 python3.9[55728]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:28:53 np0005478304 systemd[1]: Reloading Network Manager...
Oct  9 05:28:53 np0005478304 NetworkManager[51682]: <info>  [1760002133.9987] audit: op="reload" arg="0" pid=55732 uid=0 result="success"
Oct  9 05:28:53 np0005478304 NetworkManager[51682]: <info>  [1760002133.9992] config: signal: SIGHUP,config-files,values,values-user,no-auto-default,dns-mode (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  9 05:28:53 np0005478304 NetworkManager[51682]: <info>  [1760002133.9993] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 05:28:54 np0005478304 systemd[1]: Reloaded Network Manager.
Oct  9 05:28:54 np0005478304 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 05:28:54 np0005478304 systemd[1]: session-10.scope: Consumed 35.902s CPU time.
Oct  9 05:28:54 np0005478304 systemd-logind[743]: Session 10 logged out. Waiting for processes to exit.
Oct  9 05:28:54 np0005478304 systemd-logind[743]: Removed session 10.
Oct  9 05:28:57 np0005478304 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 05:28:59 np0005478304 systemd-logind[743]: New session 11 of user zuul.
Oct  9 05:28:59 np0005478304 systemd[1]: Started Session 11 of User zuul.
Oct  9 05:29:00 np0005478304 python3.9[55918]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:00 np0005478304 python3.9[56072]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:01 np0005478304 python3.9[56266]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:01 np0005478304 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 05:29:01 np0005478304 systemd[1]: session-11.scope: Consumed 1.622s CPU time.
Oct  9 05:29:01 np0005478304 systemd-logind[743]: Session 11 logged out. Waiting for processes to exit.
Oct  9 05:29:01 np0005478304 systemd-logind[743]: Removed session 11.
Oct  9 05:29:04 np0005478304 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 05:29:06 np0005478304 systemd-logind[743]: New session 12 of user zuul.
Oct  9 05:29:06 np0005478304 systemd[1]: Started Session 12 of User zuul.
Oct  9 05:29:07 np0005478304 python3.9[56448]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:08 np0005478304 python3.9[56602]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478304 python3.9[56758]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:09 np0005478304 python3.9[56842]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:11 np0005478304 python3.9[56996]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:12 np0005478304 python3.9[57191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:12 np0005478304 python3.9[57343]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:29:12 np0005478304 podman[57344]: 2025-10-09 09:29:12.840362997 +0000 UTC m=+0.026023817 system refresh
Oct  9 05:29:13 np0005478304 python3.9[57503]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:13 np0005478304 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 05:29:13 np0005478304 python3.9[57627]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002152.978935-199-31220031632792/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0ef959cf12f87864d9dd64a3441ed9a6f21d8888 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:14 np0005478304 python3.9[57779]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:14 np0005478304 python3.9[57902]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002154.0543408-244-89885065203652/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:15 np0005478304 python3.9[58054]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:15 np0005478304 python3.9[58206]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478304 python3.9[58358]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:16 np0005478304 python3.9[58510]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:17 np0005478304 python3.9[58663]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:18 np0005478304 python3.9[58816]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:19 np0005478304 python3.9[58970]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:19 np0005478304 python3.9[59122]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:29:20 np0005478304 python3.9[59274]: ansible-service_facts Invoked
Oct  9 05:29:20 np0005478304 network[59291]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:29:20 np0005478304 network[59292]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:29:20 np0005478304 network[59293]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:29:23 np0005478304 python3.9[59747]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 05:29:25 np0005478304 python3.9[59900]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 05:29:26 np0005478304 python3.9[60052]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:27 np0005478304 python3.9[60177]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002166.5722222-642-99697439073434/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:28 np0005478304 python3.9[60331]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:28 np0005478304 python3.9[60456]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002168.1039004-688-10964541435107/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:30 np0005478304 python3.9[60610]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:31 np0005478304 python3.9[60764]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:32 np0005478304 python3.9[60848]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:33 np0005478304 python3.9[61002]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:29:34 np0005478304 python3.9[61086]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:29:34 np0005478304 chronyd[756]: chronyd exiting
Oct  9 05:29:34 np0005478304 systemd[1]: Stopping NTP client/server...
Oct  9 05:29:34 np0005478304 systemd[1]: chronyd.service: Deactivated successfully.
Oct  9 05:29:34 np0005478304 systemd[1]: Stopped NTP client/server.
Oct  9 05:29:34 np0005478304 systemd[1]: Starting NTP client/server...
Oct  9 05:29:34 np0005478304 chronyd[61094]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 05:29:34 np0005478304 chronyd[61094]: Frequency -10.736 +/- 0.303 ppm read from /var/lib/chrony/drift
Oct  9 05:29:34 np0005478304 chronyd[61094]: Loaded seccomp filter (level 2)
Oct  9 05:29:34 np0005478304 systemd[1]: Started NTP client/server.
Oct  9 05:29:34 np0005478304 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 05:29:34 np0005478304 systemd[1]: session-12.scope: Consumed 17.564s CPU time.
Oct  9 05:29:34 np0005478304 systemd-logind[743]: Session 12 logged out. Waiting for processes to exit.
Oct  9 05:29:34 np0005478304 systemd-logind[743]: Removed session 12.
Oct  9 05:29:39 np0005478304 systemd-logind[743]: New session 13 of user zuul.
Oct  9 05:29:39 np0005478304 systemd[1]: Started Session 13 of User zuul.
Oct  9 05:29:40 np0005478304 python3.9[61275]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:40 np0005478304 python3.9[61427]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:41 np0005478304 python3.9[61550]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002180.3498878-64-181304161752246/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:41 np0005478304 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 05:29:41 np0005478304 systemd[1]: session-13.scope: Consumed 1.119s CPU time.
Oct  9 05:29:41 np0005478304 systemd-logind[743]: Session 13 logged out. Waiting for processes to exit.
Oct  9 05:29:41 np0005478304 systemd-logind[743]: Removed session 13.
Oct  9 05:29:47 np0005478304 systemd-logind[743]: New session 14 of user zuul.
Oct  9 05:29:47 np0005478304 systemd[1]: Started Session 14 of User zuul.
Oct  9 05:29:48 np0005478304 python3.9[61728]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:29:48 np0005478304 python3.9[61884]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:49 np0005478304 python3.9[62059]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:50 np0005478304 python3.9[62182]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1760002189.1027794-85-263101861882575/.source.json _original_basename=.jqp5bhde follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:51 np0005478304 python3.9[62334]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:51 np0005478304 python3.9[62457]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002190.922277-154-143999058433988/.source _original_basename=.6861yxbk follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:52 np0005478304 python3.9[62609]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:52 np0005478304 python3.9[62761]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:53 np0005478304 python3.9[62884]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002192.3912969-226-275493197117835/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:53 np0005478304 python3.9[63036]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:53 np0005478304 python3.9[63159]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002193.2198818-226-244282562462980/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:29:54 np0005478304 python3.9[63311]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:55 np0005478304 python3.9[63463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:55 np0005478304 python3.9[63586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002194.8252425-337-217834616884338/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:56 np0005478304 python3.9[63738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:56 np0005478304 python3.9[63861]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002195.7022088-382-114778397521787/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:57 np0005478304 python3.9[64013]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:29:57 np0005478304 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478304 systemd[1]: Reloading.
Oct  9 05:29:57 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:29:57 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:29:57 np0005478304 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 05:29:57 np0005478304 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 05:29:58 np0005478304 python3.9[64239]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:58 np0005478304 python3.9[64362]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002197.8679078-452-128757997353410/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:29:59 np0005478304 python3.9[64514]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:29:59 np0005478304 python3.9[64637]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002198.794467-497-4241426648655/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:00 np0005478304 python3.9[64789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:00 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:00 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:00 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:00 np0005478304 systemd[1]: Starting Create netns directory...
Oct  9 05:30:00 np0005478304 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:00 np0005478304 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:00 np0005478304 systemd[1]: Finished Create netns directory.
Oct  9 05:30:01 np0005478304 python3.9[65015]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:01 np0005478304 network[65032]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:01 np0005478304 network[65033]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:01 np0005478304 network[65034]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:30:03 np0005478304 python3.9[65298]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:03 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:04 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:04 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:04 np0005478304 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  9 05:30:04 np0005478304 iptables.init[65338]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  9 05:30:04 np0005478304 iptables.init[65338]: iptables: Flushing firewall rules: [  OK  ]
Oct  9 05:30:04 np0005478304 systemd[1]: iptables.service: Deactivated successfully.
Oct  9 05:30:04 np0005478304 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  9 05:30:05 np0005478304 python3.9[65534]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478304 python3.9[65688]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:05 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:05 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:05 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:05 np0005478304 systemd[1]: Starting Netfilter Tables...
Oct  9 05:30:05 np0005478304 systemd[1]: Finished Netfilter Tables.
Oct  9 05:30:06 np0005478304 python3.9[65880]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:30:07 np0005478304 python3.9[66033]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:07 np0005478304 python3.9[66158]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002207.1167295-703-200231020994189/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:08 np0005478304 python3.9[66309]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:30:33 np0005478304 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 05:30:33 np0005478304 systemd[1]: session-14.scope: Consumed 13.098s CPU time.
Oct  9 05:30:33 np0005478304 systemd-logind[743]: Session 14 logged out. Waiting for processes to exit.
Oct  9 05:30:33 np0005478304 systemd-logind[743]: Removed session 14.
Oct  9 05:30:45 np0005478304 systemd-logind[743]: New session 15 of user zuul.
Oct  9 05:30:45 np0005478304 systemd[1]: Started Session 15 of User zuul.
Oct  9 05:30:46 np0005478304 python3.9[66502]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:30:47 np0005478304 python3.9[66658]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:47 np0005478304 python3.9[66833]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:48 np0005478304 python3.9[66911]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.s6exviy8 recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:49 np0005478304 python3.9[67063]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:49 np0005478304 python3.9[67141]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.tzkal50v recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:50 np0005478304 python3.9[67293]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:50 np0005478304 python3.9[67445]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:50 np0005478304 python3.9[67523]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:51 np0005478304 python3.9[67675]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:51 np0005478304 python3.9[67753]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 05:30:52 np0005478304 python3.9[67905]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:52 np0005478304 python3.9[68057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:53 np0005478304 python3.9[68135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:53 np0005478304 python3.9[68287]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:54 np0005478304 python3.9[68365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:55 np0005478304 python3.9[68517]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:55 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:55 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:55 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:55 np0005478304 python3.9[68706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:56 np0005478304 python3.9[68784]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:56 np0005478304 python3.9[68936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:30:57 np0005478304 python3.9[69014]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:30:57 np0005478304 python3.9[69166]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 05:30:57 np0005478304 systemd[1]: Reloading.
Oct  9 05:30:57 np0005478304 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 05:30:57 np0005478304 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 05:30:58 np0005478304 systemd[1]: Starting Create netns directory...
Oct  9 05:30:58 np0005478304 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 05:30:58 np0005478304 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 05:30:58 np0005478304 systemd[1]: Finished Create netns directory.
Oct  9 05:30:58 np0005478304 python3.9[69356]: ansible-ansible.builtin.service_facts Invoked
Oct  9 05:30:58 np0005478304 network[69373]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 05:30:58 np0005478304 network[69374]: 'network-scripts' will be removed from distribution in near future.
Oct  9 05:30:58 np0005478304 network[69375]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 05:31:01 np0005478304 python3.9[69638]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:01 np0005478304 python3.9[69716]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478304 python3.9[69868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:02 np0005478304 python3.9[70020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:03 np0005478304 python3.9[70143]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002262.5098042-611-196559692299349/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:04 np0005478304 python3.9[70295]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 05:31:04 np0005478304 systemd[1]: Starting Time & Date Service...
Oct  9 05:31:04 np0005478304 systemd[1]: Started Time & Date Service.
Oct  9 05:31:05 np0005478304 python3.9[70451]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:05 np0005478304 python3.9[70603]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:06 np0005478304 python3.9[70726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002265.2474794-716-113819690673642/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:06 np0005478304 python3.9[70878]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:06 np0005478304 python3.9[71001]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002266.176149-761-239516292401385/.source.yaml _original_basename=.x1z9agy5 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:07 np0005478304 python3.9[71153]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:07 np0005478304 python3.9[71276]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002267.1216774-806-81207000309079/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:08 np0005478304 python3.9[71428]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478304 python3.9[71581]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:09 np0005478304 python3[71734]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 05:31:10 np0005478304 python3.9[71886]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:10 np0005478304 python3.9[72009]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002269.9740186-923-249375501736869/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:11 np0005478304 python3.9[72161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:11 np0005478304 python3.9[72284]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002270.897485-968-245284819531001/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:12 np0005478304 python3.9[72436]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:12 np0005478304 python3.9[72559]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002271.784852-1012-26159939216542/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:13 np0005478304 python3.9[72711]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:13 np0005478304 python3.9[72834]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002272.671044-1058-142600847700140/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:13 np0005478304 python3.9[72986]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 05:31:14 np0005478304 python3.9[73109]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002273.5549917-1103-173686724012912/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:14 np0005478304 python3.9[73261]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:15 np0005478304 python3.9[73413]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:16 np0005478304 python3.9[73572]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:16 np0005478304 python3.9[73725]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478304 python3.9[73877]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:17 np0005478304 python3.9[74029]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 05:31:18 np0005478304 python3.9[74182]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 05:31:18 np0005478304 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 05:31:18 np0005478304 systemd[1]: session-15.scope: Consumed 21.646s CPU time.
Oct  9 05:31:18 np0005478304 systemd-logind[743]: Session 15 logged out. Waiting for processes to exit.
Oct  9 05:31:18 np0005478304 systemd-logind[743]: Removed session 15.
Oct  9 05:31:23 np0005478304 systemd-logind[743]: New session 16 of user zuul.
Oct  9 05:31:23 np0005478304 systemd[1]: Started Session 16 of User zuul.
Oct  9 05:31:23 np0005478304 python3.9[74363]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 05:31:24 np0005478304 python3.9[74515]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:25 np0005478304 python3.9[74667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:25 np0005478304 python3.9[74819]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=#012 create=True mode=0644 path=/tmp/ansible._8tlvlrq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:26 np0005478304 python3.9[74971]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._8tlvlrq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:27 np0005478304 python3.9[75125]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._8tlvlrq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:27 np0005478304 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 05:31:27 np0005478304 systemd[1]: session-16.scope: Consumed 2.360s CPU time.
Oct  9 05:31:27 np0005478304 systemd-logind[743]: Session 16 logged out. Waiting for processes to exit.
Oct  9 05:31:27 np0005478304 systemd-logind[743]: Removed session 16.
Oct  9 05:31:32 np0005478304 systemd-logind[743]: New session 17 of user zuul.
Oct  9 05:31:32 np0005478304 systemd[1]: Started Session 17 of User zuul.
Oct  9 05:31:33 np0005478304 python3.9[75303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:34 np0005478304 python3.9[75459]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 05:31:34 np0005478304 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 05:31:34 np0005478304 python3.9[75613]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 05:31:35 np0005478304 python3.9[75768]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:35 np0005478304 python3.9[75921]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:36 np0005478304 python3.9[76075]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:37 np0005478304 python3.9[76230]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:37 np0005478304 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 05:31:37 np0005478304 systemd[1]: session-17.scope: Consumed 3.066s CPU time.
Oct  9 05:31:37 np0005478304 systemd-logind[743]: Session 17 logged out. Waiting for processes to exit.
Oct  9 05:31:37 np0005478304 systemd-logind[743]: Removed session 17.
Oct  9 05:31:42 np0005478304 systemd-logind[743]: New session 18 of user zuul.
Oct  9 05:31:42 np0005478304 systemd[1]: Started Session 18 of User zuul.
Oct  9 05:31:43 np0005478304 python3.9[76409]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:44 np0005478304 python3.9[76565]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 05:31:44 np0005478304 chronyd[61094]: Selected source 198.137.202.32 (pool.ntp.org)
Oct  9 05:31:44 np0005478304 python3.9[76649]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 05:31:46 np0005478304 python3.9[76800]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 05:31:47 np0005478304 python3.9[76953]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/reboot_required/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:47 np0005478304 python3.9[77105]: ansible-ansible.builtin.file Invoked with mode=0600 path=/var/lib/openstack/reboot_required/needs_restarting state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478304 python3.9[77257]: ansible-ansible.builtin.lineinfile Invoked with dest=/var/lib/openstack/reboot_required/needs_restarting line=Core libraries or services have been updated since boot-up:#012  * systemd#012#012Reboot is required to fully utilize these updates.#012More information: https://access.redhat.com/solutions/27943 path=/var/lib/openstack/reboot_required/needs_restarting state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 05:31:48 np0005478304 python3.9[77407]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 05:31:49 np0005478304 python3.9[77557]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:49 np0005478304 python3.9[77707]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 05:31:50 np0005478304 python3.9[77859]: ansible-ansible.legacy.setup Invoked with gather_subset=['min'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 05:31:50 np0005478304 python3.9[77972]: ansible-ansible.legacy.find Invoked with paths=['/sbin', '/bin', '/usr/sbin', '/usr/bin', '/usr/local/sbin'] patterns=['shutdown'] file_type=any read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:31:56 compute-2 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  9 09:31:56 compute-2 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:56 compute-2 kernel: BIOS-provided physical RAM map:
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  9 09:31:56 compute-2 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct  9 09:31:56 compute-2 kernel: NX (Execute Disable) protection: active
Oct  9 09:31:56 compute-2 kernel: APIC: Static calls initialized
Oct  9 09:31:56 compute-2 kernel: SMBIOS 2.8 present.
Oct  9 09:31:56 compute-2 kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct  9 09:31:56 compute-2 kernel: Hypervisor detected: KVM
Oct  9 09:31:56 compute-2 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  9 09:31:56 compute-2 kernel: kvm-clock: using sched offset of 1895964190071 cycles
Oct  9 09:31:56 compute-2 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  9 09:31:56 compute-2 kernel: tsc: Detected 2445.406 MHz processor
Oct  9 09:31:56 compute-2 kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct  9 09:31:56 compute-2 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  9 09:31:56 compute-2 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  9 09:31:56 compute-2 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct  9 09:31:56 compute-2 kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct  9 09:31:56 compute-2 kernel: Using GB pages for direct mapping
Oct  9 09:31:56 compute-2 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  9 09:31:56 compute-2 kernel: ACPI: Early table checksum verification disabled
Oct  9 09:31:56 compute-2 kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct  9 09:31:56 compute-2 kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct  9 09:31:56 compute-2 kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct  9 09:31:56 compute-2 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct  9 09:31:56 compute-2 kernel: No NUMA configuration found
Oct  9 09:31:56 compute-2 kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct  9 09:31:56 compute-2 kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct  9 09:31:56 compute-2 kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct  9 09:31:56 compute-2 kernel: Zone ranges:
Oct  9 09:31:56 compute-2 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  9 09:31:56 compute-2 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  9 09:31:56 compute-2 kernel:  Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:56 compute-2 kernel:  Device   empty
Oct  9 09:31:56 compute-2 kernel: Movable zone start for each node
Oct  9 09:31:56 compute-2 kernel: Early memory node ranges
Oct  9 09:31:56 compute-2 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  9 09:31:56 compute-2 kernel:  node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct  9 09:31:56 compute-2 kernel:  node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct  9 09:31:56 compute-2 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct  9 09:31:56 compute-2 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  9 09:31:56 compute-2 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  9 09:31:56 compute-2 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  9 09:31:56 compute-2 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  9 09:31:56 compute-2 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  9 09:31:56 compute-2 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  9 09:31:56 compute-2 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  9 09:31:56 compute-2 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  9 09:31:56 compute-2 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  9 09:31:56 compute-2 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  9 09:31:56 compute-2 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  9 09:31:56 compute-2 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  9 09:31:56 compute-2 kernel: TSC deadline timer available
Oct  9 09:31:56 compute-2 kernel: CPU topo: Max. logical packages:   4
Oct  9 09:31:56 compute-2 kernel: CPU topo: Max. logical dies:       4
Oct  9 09:31:56 compute-2 kernel: CPU topo: Max. dies per package:   1
Oct  9 09:31:56 compute-2 kernel: CPU topo: Max. threads per core:   1
Oct  9 09:31:56 compute-2 kernel: CPU topo: Num. cores per package:     1
Oct  9 09:31:56 compute-2 kernel: CPU topo: Num. threads per package:   1
Oct  9 09:31:56 compute-2 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct  9 09:31:56 compute-2 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  9 09:31:56 compute-2 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct  9 09:31:56 compute-2 kernel: kvm-guest: setup PV sched yield
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  9 09:31:56 compute-2 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  9 09:31:56 compute-2 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct  9 09:31:56 compute-2 kernel: Booting paravirtualized kernel on KVM
Oct  9 09:31:56 compute-2 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  9 09:31:56 compute-2 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct  9 09:31:56 compute-2 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct  9 09:31:56 compute-2 kernel: kvm-guest: PV spinlocks enabled
Oct  9 09:31:56 compute-2 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:56 compute-2 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  9 09:31:56 compute-2 kernel: random: crng init done
Oct  9 09:31:56 compute-2 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: Fallback order for Node 0: 0 
Oct  9 09:31:56 compute-2 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  9 09:31:56 compute-2 kernel: Policy zone: Normal
Oct  9 09:31:56 compute-2 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  9 09:31:56 compute-2 kernel: software IO TLB: area num 4.
Oct  9 09:31:56 compute-2 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct  9 09:31:56 compute-2 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  9 09:31:56 compute-2 kernel: ftrace: allocated 193 pages with 3 groups
Oct  9 09:31:56 compute-2 kernel: Dynamic Preempt: voluntary
Oct  9 09:31:56 compute-2 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  9 09:31:56 compute-2 kernel: rcu: #011RCU event tracing is enabled.
Oct  9 09:31:56 compute-2 kernel: rcu: #011RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct  9 09:31:56 compute-2 kernel: #011Trampoline variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-2 kernel: #011Rude variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-2 kernel: #011Tracing variant of Tasks RCU enabled.
Oct  9 09:31:56 compute-2 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  9 09:31:56 compute-2 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct  9 09:31:56 compute-2 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-2 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-2 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct  9 09:31:56 compute-2 kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct  9 09:31:56 compute-2 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  9 09:31:56 compute-2 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  9 09:31:56 compute-2 kernel: Console: colour VGA+ 80x25
Oct  9 09:31:56 compute-2 kernel: printk: console [ttyS0] enabled
Oct  9 09:31:56 compute-2 kernel: ACPI: Core revision 20230331
Oct  9 09:31:56 compute-2 kernel: APIC: Switch to symmetric I/O mode setup
Oct  9 09:31:56 compute-2 kernel: x2apic enabled
Oct  9 09:31:56 compute-2 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  9 09:31:56 compute-2 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct  9 09:31:56 compute-2 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct  9 09:31:56 compute-2 kernel: kvm-guest: setup PV IPIs
Oct  9 09:31:56 compute-2 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  9 09:31:56 compute-2 kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct  9 09:31:56 compute-2 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  9 09:31:56 compute-2 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  9 09:31:56 compute-2 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  9 09:31:56 compute-2 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  9 09:31:56 compute-2 kernel: Spectre V2 : Mitigation: Retpolines
Oct  9 09:31:56 compute-2 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  9 09:31:56 compute-2 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct  9 09:31:56 compute-2 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  9 09:31:56 compute-2 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  9 09:31:56 compute-2 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  9 09:31:56 compute-2 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  9 09:31:56 compute-2 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  9 09:31:56 compute-2 kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct  9 09:31:56 compute-2 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  9 09:31:56 compute-2 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  9 09:31:56 compute-2 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  9 09:31:56 compute-2 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct  9 09:31:56 compute-2 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  9 09:31:56 compute-2 kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct  9 09:31:56 compute-2 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct  9 09:31:56 compute-2 kernel: Freeing SMP alternatives memory: 40K
Oct  9 09:31:56 compute-2 kernel: pid_max: default: 32768 minimum: 301
Oct  9 09:31:56 compute-2 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  9 09:31:56 compute-2 kernel: landlock: Up and running.
Oct  9 09:31:56 compute-2 kernel: Yama: becoming mindful.
Oct  9 09:31:56 compute-2 kernel: SELinux:  Initializing.
Oct  9 09:31:56 compute-2 kernel: LSM support for eBPF active
Oct  9 09:31:56 compute-2 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct  9 09:31:56 compute-2 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  9 09:31:56 compute-2 kernel: ... version:                0
Oct  9 09:31:56 compute-2 kernel: ... bit width:              48
Oct  9 09:31:56 compute-2 kernel: ... generic registers:      6
Oct  9 09:31:56 compute-2 kernel: ... value mask:             0000ffffffffffff
Oct  9 09:31:56 compute-2 kernel: ... max period:             00007fffffffffff
Oct  9 09:31:56 compute-2 kernel: ... fixed-purpose events:   0
Oct  9 09:31:56 compute-2 kernel: ... event mask:             000000000000003f
Oct  9 09:31:56 compute-2 kernel: signal: max sigframe size: 3376
Oct  9 09:31:56 compute-2 kernel: rcu: Hierarchical SRCU implementation.
Oct  9 09:31:56 compute-2 kernel: rcu: #011Max phase no-delay instances is 400.
Oct  9 09:31:56 compute-2 kernel: smp: Bringing up secondary CPUs ...
Oct  9 09:31:56 compute-2 kernel: smpboot: x86: Booting SMP configuration:
Oct  9 09:31:56 compute-2 kernel: .... node  #0, CPUs:      #1 #2 #3
Oct  9 09:31:56 compute-2 kernel: smp: Brought up 1 node, 4 CPUs
Oct  9 09:31:56 compute-2 kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct  9 09:31:56 compute-2 kernel: node 0 deferred pages initialised in 16ms
Oct  9 09:31:56 compute-2 kernel: Memory: 7767908K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct  9 09:31:56 compute-2 kernel: devtmpfs: initialized
Oct  9 09:31:56 compute-2 kernel: x86/mm: Memory block size: 128MB
Oct  9 09:31:56 compute-2 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  9 09:31:56 compute-2 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: pinctrl core: initialized pinctrl subsystem
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  9 09:31:56 compute-2 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  9 09:31:56 compute-2 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  9 09:31:56 compute-2 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  9 09:31:56 compute-2 kernel: audit: initializing netlink subsys (disabled)
Oct  9 09:31:56 compute-2 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  9 09:31:56 compute-2 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  9 09:31:56 compute-2 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  9 09:31:56 compute-2 kernel: audit: type=2000 audit(1760002315.537:1): state=initialized audit_enabled=0 res=1
Oct  9 09:31:56 compute-2 kernel: cpuidle: using governor menu
Oct  9 09:31:56 compute-2 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  9 09:31:56 compute-2 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct  9 09:31:56 compute-2 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct  9 09:31:56 compute-2 kernel: PCI: Using configuration type 1 for base access
Oct  9 09:31:56 compute-2 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  9 09:31:56 compute-2 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  9 09:31:56 compute-2 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  9 09:31:56 compute-2 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  9 09:31:56 compute-2 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  9 09:31:56 compute-2 kernel: Demotion targets for Node 0: null
Oct  9 09:31:56 compute-2 kernel: cryptd: max_cpu_qlen set to 1000
Oct  9 09:31:56 compute-2 kernel: ACPI: Added _OSI(Module Device)
Oct  9 09:31:56 compute-2 kernel: ACPI: Added _OSI(Processor Device)
Oct  9 09:31:56 compute-2 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  9 09:31:56 compute-2 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  9 09:31:56 compute-2 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  9 09:31:56 compute-2 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  9 09:31:56 compute-2 kernel: ACPI: Interpreter enabled
Oct  9 09:31:56 compute-2 kernel: ACPI: PM: (supports S0 S5)
Oct  9 09:31:56 compute-2 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  9 09:31:56 compute-2 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  9 09:31:56 compute-2 kernel: PCI: Using E820 reservations for host bridge windows
Oct  9 09:31:56 compute-2 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  9 09:31:56 compute-2 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  9 09:31:56 compute-2 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct  9 09:31:56 compute-2 kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct  9 09:31:56 compute-2 kernel: PCI host bridge to bus 0000:00
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:02: extended config space not accessible
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [1] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [2] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [3] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [4] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [5] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [6] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [7] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [8] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [9] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [10] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [11] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [12] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [13] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [14] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [15] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [16] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [17] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [18] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [19] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [20] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [21] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [22] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [23] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [24] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [25] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [26] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [27] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [28] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [29] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [30] registered
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [31] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-2] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-3] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-4] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-5] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-6] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct  9 09:31:56 compute-2 kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-7] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-8] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-9] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-10] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-11] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-12] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-13] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-14] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-15] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-16] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-2 kernel: acpiphp: Slot [0-17] registered
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct  9 09:31:56 compute-2 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct  9 09:31:56 compute-2 kernel: iommu: Default domain type: Translated
Oct  9 09:31:56 compute-2 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  9 09:31:56 compute-2 kernel: SCSI subsystem initialized
Oct  9 09:31:56 compute-2 kernel: ACPI: bus type USB registered
Oct  9 09:31:56 compute-2 kernel: usbcore: registered new interface driver usbfs
Oct  9 09:31:56 compute-2 kernel: usbcore: registered new interface driver hub
Oct  9 09:31:56 compute-2 kernel: usbcore: registered new device driver usb
Oct  9 09:31:56 compute-2 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  9 09:31:56 compute-2 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  9 09:31:56 compute-2 kernel: PTP clock support registered
Oct  9 09:31:56 compute-2 kernel: EDAC MC: Ver: 3.0.0
Oct  9 09:31:56 compute-2 kernel: NetLabel: Initializing
Oct  9 09:31:56 compute-2 kernel: NetLabel:  domain hash size = 128
Oct  9 09:31:56 compute-2 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  9 09:31:56 compute-2 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  9 09:31:56 compute-2 kernel: PCI: Using ACPI for IRQ routing
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  9 09:31:56 compute-2 kernel: vgaarb: loaded
Oct  9 09:31:56 compute-2 kernel: clocksource: Switched to clocksource kvm-clock
Oct  9 09:31:56 compute-2 kernel: VFS: Disk quotas dquot_6.6.0
Oct  9 09:31:56 compute-2 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  9 09:31:56 compute-2 kernel: pnp: PnP ACPI init
Oct  9 09:31:56 compute-2 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct  9 09:31:56 compute-2 kernel: pnp: PnP ACPI: found 5 devices
Oct  9 09:31:56 compute-2 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_INET protocol family
Oct  9 09:31:56 compute-2 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  9 09:31:56 compute-2 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_XDP protocol family
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-2 kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct  9 09:31:56 compute-2 kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct  9 09:31:56 compute-2 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct  9 09:31:56 compute-2 kernel: PCI: CLS 0 bytes, default 64
Oct  9 09:31:56 compute-2 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  9 09:31:56 compute-2 kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct  9 09:31:56 compute-2 kernel: Trying to unpack rootfs image as initramfs...
Oct  9 09:31:56 compute-2 kernel: ACPI: bus type thunderbolt registered
Oct  9 09:31:56 compute-2 kernel: Initialise system trusted keyrings
Oct  9 09:31:56 compute-2 kernel: Key type blacklist registered
Oct  9 09:31:56 compute-2 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  9 09:31:56 compute-2 kernel: zbud: loaded
Oct  9 09:31:56 compute-2 kernel: integrity: Platform Keyring initialized
Oct  9 09:31:56 compute-2 kernel: integrity: Machine keyring initialized
Oct  9 09:31:56 compute-2 kernel: Freeing initrd memory: 86104K
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_ALG protocol family
Oct  9 09:31:56 compute-2 kernel: xor: automatically using best checksumming function   avx
Oct  9 09:31:56 compute-2 kernel: Key type asymmetric registered
Oct  9 09:31:56 compute-2 kernel: Asymmetric key parser 'x509' registered
Oct  9 09:31:56 compute-2 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  9 09:31:56 compute-2 kernel: io scheduler mq-deadline registered
Oct  9 09:31:56 compute-2 kernel: io scheduler kyber registered
Oct  9 09:31:56 compute-2 kernel: io scheduler bfq registered
Oct  9 09:31:56 compute-2 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct  9 09:31:56 compute-2 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct  9 09:31:56 compute-2 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct  9 09:31:56 compute-2 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct  9 09:31:56 compute-2 kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct  9 09:31:56 compute-2 kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct  9 09:31:56 compute-2 kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct  9 09:31:56 compute-2 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  9 09:31:56 compute-2 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  9 09:31:56 compute-2 kernel: ACPI: button: Power Button [PWRF]
Oct  9 09:31:56 compute-2 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct  9 09:31:56 compute-2 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  9 09:31:56 compute-2 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  9 09:31:56 compute-2 kernel: Non-volatile memory driver v1.3
Oct  9 09:31:56 compute-2 kernel: rdac: device handler registered
Oct  9 09:31:56 compute-2 kernel: hp_sw: device handler registered
Oct  9 09:31:56 compute-2 kernel: emc: device handler registered
Oct  9 09:31:56 compute-2 kernel: alua: device handler registered
Oct  9 09:31:56 compute-2 kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct  9 09:31:56 compute-2 kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct  9 09:31:56 compute-2 kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct  9 09:31:56 compute-2 kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct  9 09:31:56 compute-2 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  9 09:31:56 compute-2 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  9 09:31:56 compute-2 kernel: usb usb1: Product: UHCI Host Controller
Oct  9 09:31:56 compute-2 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  9 09:31:56 compute-2 kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct  9 09:31:56 compute-2 kernel: hub 1-0:1.0: USB hub found
Oct  9 09:31:56 compute-2 kernel: hub 1-0:1.0: 2 ports detected
Oct  9 09:31:56 compute-2 kernel: usbcore: registered new interface driver usbserial_generic
Oct  9 09:31:56 compute-2 kernel: usbserial: USB Serial support registered for generic
Oct  9 09:31:56 compute-2 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  9 09:31:56 compute-2 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  9 09:31:56 compute-2 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  9 09:31:56 compute-2 kernel: mousedev: PS/2 mouse device common for all mice
Oct  9 09:31:56 compute-2 kernel: rtc_cmos 00:03: RTC can wake from S4
Oct  9 09:31:56 compute-2 kernel: rtc_cmos 00:03: registered as rtc0
Oct  9 09:31:56 compute-2 kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:31:56 UTC (1760002316)
Oct  9 09:31:56 compute-2 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct  9 09:31:56 compute-2 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  9 09:31:56 compute-2 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  9 09:31:56 compute-2 kernel: usbcore: registered new interface driver usbhid
Oct  9 09:31:56 compute-2 kernel: usbhid: USB HID core driver
Oct  9 09:31:56 compute-2 kernel: drop_monitor: Initializing network drop monitor service
Oct  9 09:31:56 compute-2 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  9 09:31:56 compute-2 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  9 09:31:56 compute-2 kernel: Initializing XFRM netlink socket
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_INET6 protocol family
Oct  9 09:31:56 compute-2 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  9 09:31:56 compute-2 kernel: Segment Routing with IPv6
Oct  9 09:31:56 compute-2 kernel: NET: Registered PF_PACKET protocol family
Oct  9 09:31:56 compute-2 kernel: mpls_gso: MPLS GSO support
Oct  9 09:31:56 compute-2 kernel: IPI shorthand broadcast: enabled
Oct  9 09:31:56 compute-2 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  9 09:31:56 compute-2 kernel: AES CTR mode by8 optimization enabled
Oct  9 09:31:56 compute-2 kernel: sched_clock: Marking stable (1017001537, 145248786)->(1400827785, -238577462)
Oct  9 09:31:56 compute-2 kernel: registered taskstats version 1
Oct  9 09:31:56 compute-2 kernel: Loading compiled-in X.509 certificates
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  9 09:31:56 compute-2 kernel: Demotion targets for Node 0: null
Oct  9 09:31:56 compute-2 kernel: page_owner is disabled
Oct  9 09:31:56 compute-2 kernel: Key type .fscrypt registered
Oct  9 09:31:56 compute-2 kernel: Key type fscrypt-provisioning registered
Oct  9 09:31:56 compute-2 kernel: Key type big_key registered
Oct  9 09:31:56 compute-2 kernel: Key type encrypted registered
Oct  9 09:31:56 compute-2 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  9 09:31:56 compute-2 kernel: Loading compiled-in module X.509 certificates
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  9 09:31:56 compute-2 kernel: ima: Allocated hash algorithm: sha256
Oct  9 09:31:56 compute-2 kernel: ima: No architecture policies found
Oct  9 09:31:56 compute-2 kernel: evm: Initialising EVM extended attributes:
Oct  9 09:31:56 compute-2 kernel: evm: security.selinux
Oct  9 09:31:56 compute-2 kernel: evm: security.SMACK64 (disabled)
Oct  9 09:31:56 compute-2 kernel: evm: security.SMACK64EXEC (disabled)
Oct  9 09:31:56 compute-2 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  9 09:31:56 compute-2 kernel: evm: security.SMACK64MMAP (disabled)
Oct  9 09:31:56 compute-2 kernel: evm: security.apparmor (disabled)
Oct  9 09:31:56 compute-2 kernel: evm: security.ima
Oct  9 09:31:56 compute-2 kernel: evm: security.capability
Oct  9 09:31:56 compute-2 kernel: evm: HMAC attrs: 0x1
Oct  9 09:31:56 compute-2 kernel: Running certificate verification RSA selftest
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  9 09:31:56 compute-2 kernel: Running certificate verification ECDSA selftest
Oct  9 09:31:56 compute-2 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  9 09:31:56 compute-2 kernel: clk: Disabling unused clocks
Oct  9 09:31:56 compute-2 kernel: Freeing unused decrypted memory: 2028K
Oct  9 09:31:56 compute-2 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  9 09:31:56 compute-2 kernel: Write protecting the kernel read-only data: 30720k
Oct  9 09:31:56 compute-2 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  9 09:31:56 compute-2 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  9 09:31:56 compute-2 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  9 09:31:56 compute-2 kernel: Run /init as init process
Oct  9 09:31:56 compute-2 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:31:56 compute-2 systemd: Detected virtualization kvm.
Oct  9 09:31:56 compute-2 systemd: Detected architecture x86-64.
Oct  9 09:31:56 compute-2 systemd: Running in initrd.
Oct  9 09:31:56 compute-2 systemd: No hostname configured, using default hostname.
Oct  9 09:31:56 compute-2 systemd: Hostname set to <localhost>.
Oct  9 09:31:56 compute-2 systemd: Initializing machine ID from VM UUID.
Oct  9 09:31:56 compute-2 systemd: Queued start job for default target Initrd Default Target.
Oct  9 09:31:56 compute-2 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:56 compute-2 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:31:56 compute-2 systemd: Reached target Initrd /usr File System.
Oct  9 09:31:56 compute-2 systemd: Reached target Local File Systems.
Oct  9 09:31:56 compute-2 systemd: Reached target Path Units.
Oct  9 09:31:56 compute-2 systemd: Reached target Slice Units.
Oct  9 09:31:56 compute-2 systemd: Reached target Swaps.
Oct  9 09:31:56 compute-2 systemd: Reached target Timer Units.
Oct  9 09:31:56 compute-2 systemd: Listening on D-Bus System Message Bus Socket.
Oct  9 09:31:56 compute-2 systemd: Listening on Journal Socket (/dev/log).
Oct  9 09:31:56 compute-2 systemd: Listening on Journal Socket.
Oct  9 09:31:56 compute-2 systemd: Listening on udev Control Socket.
Oct  9 09:31:56 compute-2 systemd: Listening on udev Kernel Socket.
Oct  9 09:31:56 compute-2 systemd: Reached target Socket Units.
Oct  9 09:31:56 compute-2 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:31:56 compute-2 systemd: Starting Journal Service...
Oct  9 09:31:56 compute-2 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  9 09:31:56 compute-2 systemd: Starting Apply Kernel Variables...
Oct  9 09:31:56 compute-2 systemd: Starting Create System Users...
Oct  9 09:31:56 compute-2 systemd: Starting Setup Virtual Console...
Oct  9 09:31:56 compute-2 systemd: Finished Create List of Static Device Nodes.
Oct  9 09:31:56 compute-2 systemd: Finished Apply Kernel Variables.
Oct  9 09:31:56 compute-2 systemd: Finished Create System Users.
Oct  9 09:31:56 compute-2 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  9 09:31:56 compute-2 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  9 09:31:56 compute-2 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  9 09:31:56 compute-2 kernel: usb 1-1: Manufacturer: QEMU
Oct  9 09:31:56 compute-2 kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct  9 09:31:56 compute-2 systemd: Starting Create Static Device Nodes in /dev...
Oct  9 09:31:56 compute-2 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  9 09:31:56 compute-2 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct  9 09:31:56 compute-2 systemd-journald[282]: Journal started
Oct  9 09:31:56 compute-2 systemd-journald[282]: Runtime Journal (/run/log/journal/ed71292475ec452aa842ae61b9b9ed0c) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:31:56 compute-2 systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct  9 09:31:56 compute-2 systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct  9 09:31:56 compute-2 systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  9 09:31:56 compute-2 systemd: Started Journal Service.
Oct  9 09:31:56 compute-2 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:31:57 compute-2 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:31:57 compute-2 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:31:57 compute-2 systemd[1]: Finished Setup Virtual Console.
Oct  9 09:31:57 compute-2 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  9 09:31:57 compute-2 systemd[1]: Starting dracut cmdline hook...
Oct  9 09:31:57 compute-2 dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Oct  9 09:31:57 compute-2 dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  9 09:31:57 compute-2 systemd[1]: Finished dracut cmdline hook.
Oct  9 09:31:57 compute-2 systemd[1]: Starting dracut pre-udev hook...
Oct  9 09:31:57 compute-2 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  9 09:31:57 compute-2 kernel: device-mapper: uevent: version 1.0.3
Oct  9 09:31:57 compute-2 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  9 09:31:57 compute-2 kernel: RPC: Registered named UNIX socket transport module.
Oct  9 09:31:57 compute-2 kernel: RPC: Registered udp transport module.
Oct  9 09:31:57 compute-2 kernel: RPC: Registered tcp transport module.
Oct  9 09:31:57 compute-2 kernel: RPC: Registered tcp-with-tls transport module.
Oct  9 09:31:57 compute-2 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  9 09:31:57 compute-2 rpc.statd[416]: Version 2.5.4 starting
Oct  9 09:31:57 compute-2 rpc.statd[416]: Initializing NSM state
Oct  9 09:31:57 compute-2 rpc.idmapd[421]: Setting log level to 0
Oct  9 09:31:57 compute-2 systemd[1]: Finished dracut pre-udev hook.
Oct  9 09:31:57 compute-2 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:31:57 compute-2 systemd-udevd[434]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:31:57 compute-2 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:31:57 compute-2 systemd[1]: Starting dracut pre-trigger hook...
Oct  9 09:31:57 compute-2 systemd[1]: Finished dracut pre-trigger hook.
Oct  9 09:31:57 compute-2 systemd[1]: Starting Coldplug All udev Devices...
Oct  9 09:31:57 compute-2 systemd[1]: Created slice Slice /system/modprobe.
Oct  9 09:31:57 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:31:57 compute-2 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:31:57 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:57 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:57 compute-2 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Network.
Oct  9 09:31:57 compute-2 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  9 09:31:57 compute-2 systemd[1]: Starting dracut initqueue hook...
Oct  9 09:31:57 compute-2 kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct  9 09:31:57 compute-2 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  9 09:31:57 compute-2 kernel: vda: vda1
Oct  9 09:31:57 compute-2 systemd-udevd[446]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:57 compute-2 systemd-udevd[477]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:57 compute-2 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct  9 09:31:57 compute-2 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct  9 09:31:57 compute-2 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct  9 09:31:57 compute-2 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct  9 09:31:57 compute-2 kernel: scsi host0: ahci
Oct  9 09:31:57 compute-2 kernel: scsi host1: ahci
Oct  9 09:31:57 compute-2 kernel: scsi host2: ahci
Oct  9 09:31:57 compute-2 kernel: scsi host3: ahci
Oct  9 09:31:57 compute-2 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Initrd Root Device.
Oct  9 09:31:57 compute-2 kernel: scsi host4: ahci
Oct  9 09:31:57 compute-2 kernel: scsi host5: ahci
Oct  9 09:31:57 compute-2 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 52 lpm-pol 0
Oct  9 09:31:57 compute-2 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct  9 09:31:57 compute-2 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-2 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  9 09:31:57 compute-2 kernel: ata1.00: applying bridge limits
Oct  9 09:31:57 compute-2 kernel: ata1.00: configured for UDMA/100
Oct  9 09:31:57 compute-2 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  9 09:31:57 compute-2 systemd[1]: Mounting Kernel Configuration File System...
Oct  9 09:31:57 compute-2 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-2 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-2 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-2 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct  9 09:31:57 compute-2 systemd[1]: Mounted Kernel Configuration File System.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target System Initialization.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Basic System.
Oct  9 09:31:57 compute-2 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  9 09:31:57 compute-2 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  9 09:31:57 compute-2 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  9 09:31:57 compute-2 systemd[1]: Finished dracut initqueue hook.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  9 09:31:57 compute-2 systemd[1]: Reached target Remote File Systems.
Oct  9 09:31:58 compute-2 systemd[1]: Starting dracut pre-mount hook...
Oct  9 09:31:58 compute-2 systemd[1]: Finished dracut pre-mount hook.
Oct  9 09:31:58 compute-2 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  9 09:31:58 compute-2 systemd-fsck[531]: /usr/sbin/fsck.xfs: XFS file system.
Oct  9 09:31:58 compute-2 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  9 09:31:58 compute-2 systemd[1]: Mounting /sysroot...
Oct  9 09:31:58 compute-2 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  9 09:31:58 compute-2 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  9 09:31:58 compute-2 kernel: XFS (vda1): Ending clean mount
Oct  9 09:31:58 compute-2 systemd[1]: Mounted /sysroot.
Oct  9 09:31:58 compute-2 systemd[1]: Reached target Initrd Root File System.
Oct  9 09:31:58 compute-2 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  9 09:31:58 compute-2 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  9 09:31:58 compute-2 systemd[1]: Reached target Initrd File Systems.
Oct  9 09:31:58 compute-2 systemd[1]: Reached target Initrd Default Target.
Oct  9 09:31:58 compute-2 systemd[1]: Starting dracut mount hook...
Oct  9 09:31:58 compute-2 systemd[1]: Finished dracut mount hook.
Oct  9 09:31:58 compute-2 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  9 09:31:58 compute-2 rpc.idmapd[421]: exiting on signal 15
Oct  9 09:31:58 compute-2 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  9 09:31:58 compute-2 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Network.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Timer Units.
Oct  9 09:31:58 compute-2 systemd[1]: dbus.socket: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Initrd Default Target.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Basic System.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Initrd Root Device.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Initrd /usr File System.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Path Units.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Remote File Systems.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Slice Units.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Socket Units.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target System Initialization.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Local File Systems.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Swaps.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut mount hook.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut pre-mount hook.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut initqueue hook.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Apply Kernel Variables.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Coldplug All udev Devices.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut pre-trigger hook.
Oct  9 09:31:58 compute-2 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  9 09:31:58 compute-2 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Setup Virtual Console.
Oct  9 09:31:58 compute-2 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Closed udev Control Socket.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Closed udev Kernel Socket.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut pre-udev hook.
Oct  9 09:31:58 compute-2 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped dracut cmdline hook.
Oct  9 09:31:58 compute-2 systemd[1]: Starting Cleanup udev Database...
Oct  9 09:31:58 compute-2 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  9 09:31:58 compute-2 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  9 09:31:58 compute-2 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Stopped Create System Users.
Oct  9 09:31:58 compute-2 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  9 09:31:58 compute-2 systemd[1]: Finished Cleanup udev Database.
Oct  9 09:31:58 compute-2 systemd[1]: Reached target Switch Root.
Oct  9 09:31:58 compute-2 systemd[1]: Starting Switch Root...
Oct  9 09:31:58 compute-2 systemd[1]: Switching root.
Oct  9 09:31:58 compute-2 systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Oct  9 09:31:58 compute-2 systemd-journald[282]: Journal stopped
Oct  9 09:31:59 compute-2 kernel: audit: type=1404 audit(1760002318.740:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:31:59 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:31:59 compute-2 kernel: audit: type=1403 audit(1760002318.849:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  9 09:31:59 compute-2 systemd: Successfully loaded SELinux policy in 111.606ms.
Oct  9 09:31:59 compute-2 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.659ms.
Oct  9 09:31:59 compute-2 systemd: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  9 09:31:59 compute-2 systemd: Detected virtualization kvm.
Oct  9 09:31:59 compute-2 systemd: Detected architecture x86-64.
Oct  9 09:31:59 compute-2 systemd: Hostname set to <compute-2>.
Oct  9 09:31:59 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:31:59 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:31:59 compute-2 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd: Stopped Switch Root.
Oct  9 09:31:59 compute-2 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:31:59 compute-2 systemd: Created slice Slice /system/getty.
Oct  9 09:31:59 compute-2 systemd: Created slice Slice /system/serial-getty.
Oct  9 09:31:59 compute-2 systemd: Created slice Slice /system/sshd-keygen.
Oct  9 09:31:59 compute-2 systemd: Created slice User and Session Slice.
Oct  9 09:31:59 compute-2 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  9 09:31:59 compute-2 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  9 09:31:59 compute-2 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  9 09:31:59 compute-2 systemd: Reached target Local Encrypted Volumes.
Oct  9 09:31:59 compute-2 systemd: Stopped target Switch Root.
Oct  9 09:31:59 compute-2 systemd: Stopped target Initrd File Systems.
Oct  9 09:31:59 compute-2 systemd: Stopped target Initrd Root File System.
Oct  9 09:31:59 compute-2 systemd: Reached target Local Integrity Protected Volumes.
Oct  9 09:31:59 compute-2 systemd: Reached target Path Units.
Oct  9 09:31:59 compute-2 systemd: Reached target rpc_pipefs.target.
Oct  9 09:31:59 compute-2 systemd: Reached target Slice Units.
Oct  9 09:31:59 compute-2 systemd: Reached target Local Verity Protected Volumes.
Oct  9 09:31:59 compute-2 systemd: Listening on Device-mapper event daemon FIFOs.
Oct  9 09:31:59 compute-2 systemd: Listening on LVM2 poll daemon socket.
Oct  9 09:31:59 compute-2 systemd: Listening on RPCbind Server Activation Socket.
Oct  9 09:31:59 compute-2 systemd: Reached target RPC Port Mapper.
Oct  9 09:31:59 compute-2 systemd: Listening on Process Core Dump Socket.
Oct  9 09:31:59 compute-2 systemd: Listening on initctl Compatibility Named Pipe.
Oct  9 09:31:59 compute-2 systemd: Listening on udev Control Socket.
Oct  9 09:31:59 compute-2 systemd: Listening on udev Kernel Socket.
Oct  9 09:31:59 compute-2 systemd: Mounting Huge Pages File System...
Oct  9 09:31:59 compute-2 systemd: Mounting /dev/hugepages1G...
Oct  9 09:31:59 compute-2 systemd: Mounting /dev/hugepages2M...
Oct  9 09:31:59 compute-2 systemd: Mounting POSIX Message Queue File System...
Oct  9 09:31:59 compute-2 systemd: Mounting Kernel Debug File System...
Oct  9 09:31:59 compute-2 systemd: Mounting Kernel Trace File System...
Oct  9 09:31:59 compute-2 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:31:59 compute-2 systemd: Starting Create List of Static Device Nodes...
Oct  9 09:31:59 compute-2 systemd: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  9 09:31:59 compute-2 systemd: Starting Load Kernel Module configfs...
Oct  9 09:31:59 compute-2 systemd: Starting Load Kernel Module drm...
Oct  9 09:31:59 compute-2 systemd: Starting Load Kernel Module efi_pstore...
Oct  9 09:31:59 compute-2 systemd: Starting Load Kernel Module fuse...
Oct  9 09:31:59 compute-2 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  9 09:31:59 compute-2 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd: Stopped File System Check on Root Device.
Oct  9 09:31:59 compute-2 systemd: Stopped Journal Service.
Oct  9 09:31:59 compute-2 systemd: Starting Journal Service...
Oct  9 09:31:59 compute-2 systemd: Starting Load Kernel Modules...
Oct  9 09:31:59 compute-2 kernel: fuse: init (API version 7.37)
Oct  9 09:31:59 compute-2 systemd: Starting Generate network units from Kernel command line...
Oct  9 09:31:59 compute-2 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:31:59 compute-2 systemd: Starting Remount Root and Kernel File Systems...
Oct  9 09:31:59 compute-2 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd: Starting Coldplug All udev Devices...
Oct  9 09:31:59 compute-2 systemd-journald[663]: Journal started
Oct  9 09:31:59 compute-2 systemd-journald[663]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct  9 09:31:59 compute-2 systemd[1]: Queued start job for default target Multi-User System.
Oct  9 09:31:59 compute-2 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 kernel: ACPI: bus type drm_connector registered
Oct  9 09:31:59 compute-2 systemd: Started Journal Service.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted Huge Pages File System.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted /dev/hugepages1G.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted /dev/hugepages2M.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted POSIX Message Queue File System.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted Kernel Debug File System.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted Kernel Trace File System.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Create List of Static Device Nodes.
Oct  9 09:31:59 compute-2 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  9 09:31:59 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  9 09:31:59 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:59 compute-2 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module drm.
Oct  9 09:31:59 compute-2 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  9 09:31:59 compute-2 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module fuse.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Generate network units from Kernel command line.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  9 09:31:59 compute-2 systemd[1]: Activating swap /swap...
Oct  9 09:31:59 compute-2 systemd[1]: Mounting FUSE Control File System...
Oct  9 09:31:59 compute-2 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:31:59 compute-2 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct  9 09:31:59 compute-2 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  9 09:31:59 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  9 09:31:59 compute-2 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  9 09:31:59 compute-2 systemd[1]: Starting Load/Save OS Random Seed...
Oct  9 09:31:59 compute-2 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  9 09:31:59 compute-2 systemd[1]: Activated swap /swap.
Oct  9 09:31:59 compute-2 systemd[1]: Mounted FUSE Control File System.
Oct  9 09:31:59 compute-2 systemd[1]: Reached target Swaps.
Oct  9 09:31:59 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  9 09:31:59 compute-2 systemd-journald[663]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 8.834ms for 1155 entries.
Oct  9 09:31:59 compute-2 systemd-journald[663]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct  9 09:31:59 compute-2 systemd-journald[663]: Received client request to flush runtime journal.
Oct  9 09:31:59 compute-2 kernel: Bridge firewalling registered
Oct  9 09:31:59 compute-2 systemd-modules-load[664]: Inserted module 'br_netfilter'
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load/Save OS Random Seed.
Oct  9 09:31:59 compute-2 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  9 09:31:59 compute-2 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  9 09:31:59 compute-2 systemd-modules-load[664]: Inserted module 'nf_conntrack'
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Apply Kernel Variables...
Oct  9 09:31:59 compute-2 systemd[1]: Finished Coldplug All udev Devices.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Apply Kernel Variables.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  9 09:31:59 compute-2 systemd[1]: Reached target Preparation for Local File Systems.
Oct  9 09:31:59 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:31:59 compute-2 systemd[1]: Reached target Local File Systems.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Import network configuration from initramfs...
Oct  9 09:31:59 compute-2 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  9 09:31:59 compute-2 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Automatic Boot Loader Update...
Oct  9 09:31:59 compute-2 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  9 09:31:59 compute-2 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  9 09:31:59 compute-2 bootctl[679]: Couldn't find EFI system partition, skipping.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Automatic Boot Loader Update.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Import network configuration from initramfs.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Create Volatile Files and Directories...
Oct  9 09:31:59 compute-2 systemd-udevd[681]: Using default interface naming scheme 'rhel-9.0'.
Oct  9 09:31:59 compute-2 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct  9 09:31:59 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct  9 09:31:59 compute-2 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  9 09:31:59 compute-2 systemd[1]: Finished Create Volatile Files and Directories.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Security Auditing Service...
Oct  9 09:31:59 compute-2 systemd[1]: Starting RPC Bind...
Oct  9 09:31:59 compute-2 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct  9 09:31:59 compute-2 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct  9 09:31:59 compute-2 systemd-udevd[700]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:59 compute-2 auditd[732]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  9 09:31:59 compute-2 auditd[732]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  9 09:31:59 compute-2 systemd[1]: Started RPC Bind.
Oct  9 09:31:59 compute-2 augenrules[738]: /sbin/augenrules: No change
Oct  9 09:31:59 compute-2 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  9 09:31:59 compute-2 augenrules[756]: No rules
Oct  9 09:31:59 compute-2 augenrules[756]: enabled 1
Oct  9 09:31:59 compute-2 augenrules[756]: failure 1
Oct  9 09:31:59 compute-2 augenrules[756]: pid 732
Oct  9 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct  9 09:31:59 compute-2 augenrules[756]: lost 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog 4
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-2 augenrules[756]: enabled 1
Oct  9 09:31:59 compute-2 augenrules[756]: failure 1
Oct  9 09:31:59 compute-2 augenrules[756]: pid 732
Oct  9 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct  9 09:31:59 compute-2 augenrules[756]: lost 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog 8
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-2 augenrules[756]: enabled 1
Oct  9 09:31:59 compute-2 augenrules[756]: failure 1
Oct  9 09:31:59 compute-2 augenrules[756]: pid 732
Oct  9 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct  9 09:31:59 compute-2 augenrules[756]: lost 0
Oct  9 09:31:59 compute-2 augenrules[756]: backlog 12
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct  9 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct  9 09:31:59 compute-2 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct  9 09:31:59 compute-2 systemd[1]: Started Security Auditing Service.
Oct  9 09:31:59 compute-2 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  9 09:31:59 compute-2 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  9 09:31:59 compute-2 kernel: iTCO_vendor_support: vendor-support=0
Oct  9 09:31:59 compute-2 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct  9 09:31:59 compute-2 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct  9 09:31:59 compute-2 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct  9 09:31:59 compute-2 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  9 09:31:59 compute-2 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  9 09:31:59 compute-2 systemd-udevd[708]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:31:59 compute-2 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct  9 09:31:59 compute-2 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct  9 09:31:59 compute-2 kernel: Console: switching to colour dummy device 80x25
Oct  9 09:31:59 compute-2 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  9 09:31:59 compute-2 kernel: [drm] features: -context_init
Oct  9 09:31:59 compute-2 kernel: [drm] number of scanouts: 1
Oct  9 09:31:59 compute-2 kernel: [drm] number of cap sets: 0
Oct  9 09:31:59 compute-2 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct  9 09:31:59 compute-2 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  9 09:31:59 compute-2 kernel: Console: switching to colour frame buffer device 160x50
Oct  9 09:31:59 compute-2 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  9 09:31:59 compute-2 kernel: kvm_amd: TSC scaling supported
Oct  9 09:31:59 compute-2 kernel: kvm_amd: Nested Virtualization enabled
Oct  9 09:31:59 compute-2 kernel: kvm_amd: Nested Paging enabled
Oct  9 09:31:59 compute-2 kernel: kvm_amd: LBR virtualization supported
Oct  9 09:31:59 compute-2 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct  9 09:31:59 compute-2 kernel: kvm_amd: Virtual GIF supported
Oct  9 09:32:00 compute-2 systemd[1]: Reached target System Initialization.
Oct  9 09:32:00 compute-2 systemd[1]: Started dnf makecache --timer.
Oct  9 09:32:00 compute-2 systemd[1]: Started Daily rotation of log files.
Oct  9 09:32:00 compute-2 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct  9 09:32:00 compute-2 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct  9 09:32:00 compute-2 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  9 09:32:00 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  9 09:32:00 compute-2 systemd[1]: Reached target Timer Units.
Oct  9 09:32:00 compute-2 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  9 09:32:00 compute-2 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  9 09:32:00 compute-2 systemd[1]: Reached target Socket Units.
Oct  9 09:32:00 compute-2 systemd[1]: Starting D-Bus System Message Bus...
Oct  9 09:32:00 compute-2 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:00 compute-2 systemd[1]: Started D-Bus System Message Bus.
Oct  9 09:32:00 compute-2 systemd[1]: Reached target Basic System.
Oct  9 09:32:00 compute-2 dbus-broker-lau[791]: Ready
Oct  9 09:32:00 compute-2 systemd[1]: Starting NTP client/server...
Oct  9 09:32:00 compute-2 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  9 09:32:00 compute-2 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  9 09:32:00 compute-2 systemd[1]: Started irqbalance daemon.
Oct  9 09:32:00 compute-2 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  9 09:32:00 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:32:00 compute-2 systemd[1]: Starting Netfilter Tables...
Oct  9 09:32:00 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:32:00 compute-2 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:32:00 compute-2 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  9 09:32:00 compute-2 systemd[1]: Reached target User and Group Name Lookups.
Oct  9 09:32:00 compute-2 systemd[1]: Starting Resets System Activity Logs...
Oct  9 09:32:00 compute-2 systemd[1]: Starting User Login Management...
Oct  9 09:32:00 compute-2 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  9 09:32:00 compute-2 systemd[1]: Finished Resets System Activity Logs.
Oct  9 09:32:00 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:32:00 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:32:00 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:32:00 compute-2 systemd-logind[800]: New seat seat0.
Oct  9 09:32:00 compute-2 chronyd[807]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  9 09:32:00 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:32:00 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:32:00 compute-2 chronyd[807]: Frequency -10.736 +/- 0.303 ppm read from /var/lib/chrony/drift
Oct  9 09:32:00 compute-2 chronyd[807]: Loaded seccomp filter (level 2)
Oct  9 09:32:00 compute-2 systemd[1]: Started User Login Management.
Oct  9 09:32:00 compute-2 systemd[1]: Started NTP client/server.
Oct  9 09:32:00 compute-2 systemd[1]: Finished Netfilter Tables.
Oct  9 09:32:00 compute-2 cloud-init[826]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:32:00 +0000. Up 5.27 seconds.
Oct  9 09:32:00 compute-2 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  9 09:32:00 compute-2 systemd[1]: Reached target Preparation for Network.
Oct  9 09:32:00 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Oct  9 09:32:00 compute-2 chown[828]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  9 09:32:01 compute-2 ovs-ctl[833]: Starting ovsdb-server [  OK  ]
Oct  9 09:32:01 compute-2 ovs-vsctl[882]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  9 09:32:01 compute-2 ovs-vsctl[892]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c24becb7-a313-4586-a73e-1530a4367da3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  9 09:32:01 compute-2 ovs-ctl[833]: Configuring Open vSwitch system IDs [  OK  ]
Oct  9 09:32:01 compute-2 ovs-vsctl[898]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  9 09:32:01 compute-2 ovs-ctl[833]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:01 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  9 09:32:01 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  9 09:32:01 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Oct  9 09:32:01 compute-2 ovs-ctl[942]: Inserting openvswitch module [  OK  ]
Oct  9 09:32:01 compute-2 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: Timeout policy base is empty
Oct  9 09:32:01 compute-2 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:01 compute-2 systemd-udevd[720]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:01 compute-2 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:01 compute-2 ovs-ctl[911]: Starting ovs-vswitchd [  OK  ]
Oct  9 09:32:01 compute-2 ovs-vsctl[982]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct  9 09:32:01 compute-2 ovs-ctl[911]: Enabling remote OVSDB managers [  OK  ]
Oct  9 09:32:01 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Open vSwitch...
Oct  9 09:32:01 compute-2 systemd[1]: Finished Open vSwitch.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Network Manager...
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4294] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4e5c1d91-f962-4a1e-8648-9ceaaac75860)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4297] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4383] manager[0x55e6fe2ab040]: monitoring kernel firmware directory '/lib/firmware'.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Hostname Service...
Oct  9 09:32:01 compute-2 systemd[1]: Started Hostname Service.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4944] hostname: hostname: using hostnamed
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4945] hostname: static hostname changed from (none) to "compute-2"
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4949] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5035] manager[0x55e6fe2ab040]: rfkill: Wi-Fi hardware radio set enabled
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5035] manager[0x55e6fe2ab040]: rfkill: WWAN hardware radio set enabled
Oct  9 09:32:01 compute-2 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5073] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5089] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5090] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5091] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5091] manager: Networking is enabled by state file
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5096] settings: Loaded settings plugin: keyfile (internal)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5116] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5179] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5195] dhcp: init: Using DHCP client 'internal'
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5197] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5206] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:01 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5218] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5224] device (lo): Activation: starting connection 'lo' (726b4f2c-1759-468e-9885-9a46134e929b)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5231] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5233] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5252] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5254] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5265] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5267] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5278] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5280] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5291] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5293] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5309] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5312] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5326] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5328] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5334] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5336] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5341] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5343] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5349] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5352] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5364] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5367] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5372] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5374] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5379] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/14)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5381] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5386] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5388] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 systemd[1]: Started Network Manager.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5394] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5399] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5409] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5410] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5412] device (eth0): carrier: link connected
Oct  9 09:32:01 compute-2 systemd[1]: Reached target Network.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5413] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5421] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5422] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5423] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5424] device (eth1): carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5429] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5433] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5438] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5441] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5444] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5447] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5450] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5454] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5455] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5456] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5457] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5458] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5460] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5461] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5464] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5466] policy: auto-activating connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5467] policy: auto-activating connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5467] policy: auto-activating connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5468] policy: auto-activating connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5469] policy: auto-activating connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5470] policy: auto-activating connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5470] policy: auto-activating connection 'br-ex-br' (e74d22ec-f198-4e0f-be1a-1f80d32c41d9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5472] policy: auto-activating connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5472] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5476] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5477] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5479] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5481] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5482] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5484] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5485] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct  9 09:32:01 compute-2 kernel: vlan22: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5489] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e74d22ec-f198-4e0f-be1a-1f80d32c41d9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5490] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct  9 09:32:01 compute-2 systemd[1]: Starting Network Manager Wait Online...
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5491] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:32:01 compute-2 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5534] device (lo): Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5541] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5543] manager: NetworkManager state is now CONNECTING
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5543] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5556] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5558] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5560] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 09:32:01 compute-2 kernel: vlan23: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5575] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5576] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5578] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5579] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5591] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5594] device (eth1)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5602] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5602] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5612] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5613] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5619] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5619] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5622] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5623] device (vlan23)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5626] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5627] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5628] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5629] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5630] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5634] device (eth1): disconnecting for new activation request.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5635] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5643] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5647] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  9 09:32:01 compute-2 kernel: vlan20: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5669] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5687] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5698] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5716] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5718] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  9 09:32:01 compute-2 systemd[1]: Reached target NFS client services.
Oct  9 09:32:01 compute-2 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5736] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5747] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 systemd[1]: Reached target Remote File Systems.
Oct  9 09:32:01 compute-2 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5762] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct  9 09:32:01 compute-2 kernel: vlan21: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5807] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5820] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5837] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5847] policy: auto-activating connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5850] policy: auto-activating connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5852] policy: auto-activating connection 'vlan23-if' (0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5856] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5860] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5861] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5861] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5862] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5865] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 kernel: ovs-system: left promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5871] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5871] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5873] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5877] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5878] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5878] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5887] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5894] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5900] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5902] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5907] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5909] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5913] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5914] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5932] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5935] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5940] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5947] policy: auto-activating connection 'vlan22-if' (508ad833-78ea-45c2-a626-2e5f91da07cd)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6001] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6006] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6012] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6013] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6023] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6029] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6038] policy: auto-activating connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6044] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6048] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6051] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6052] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6055] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6061] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6063] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6064] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6066] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6070] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6072] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6073] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6076] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6081] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6085] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6090] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6101] policy: auto-activating connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6104] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6118] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6124] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (508ad833-78ea-45c2-a626-2e5f91da07cd)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6124] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6126] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6129] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6134] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6135] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6142] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 kernel: ovs-system: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: No such timeout policy "ovs_test_tp"
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6146] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6148] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6149] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6151] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6156] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6158] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6162] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6164] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6166] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6167] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6168] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6169] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6173] policy: auto-activating connection 'br-ex-if' (e5a1a2e1-7841-4bc2-871e-a31627de4c3f)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6175] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6177] device (eth0): Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6182] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6195] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6196] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6196] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6198] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6200] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6202] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6203] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6209] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6211] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6213] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6217] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e5a1a2e1-7841-4bc2-871e-a31627de4c3f)
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6218] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6220] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6222] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6224] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6226] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6229] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6231] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6257] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6261] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6266] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6267] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 kernel: vlan23: entered promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6270] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6272] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6275] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6277] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6282] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6303] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6319] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6321] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6323] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6325] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6328] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6332] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6334] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 kernel: vlan22: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  9 09:32:01 compute-2 kernel: vlan20: entered promiscuous mode
Oct  9 09:32:01 compute-2 systemd-udevd[696]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6399] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6422] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6431] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6442] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6490] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6493] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6498] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6504] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 kernel: br-ex: entered promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6512] device (eth1): Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6520] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6536] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6545] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6553] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6560] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6589] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 kernel: vlan21: entered promiscuous mode
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6613] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6628] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6633] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6641] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6648] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6664] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6665] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6670] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6697] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6706] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6723] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6727] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6736] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  9 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6747] manager: startup complete
Oct  9 09:32:01 compute-2 systemd[1]: Finished Network Manager Wait Online.
Oct  9 09:32:01 compute-2 systemd[1]: Starting Cloud-init: Network Stage...
Oct  9 09:32:01 compute-2 systemd[1]: Starting Authorization Manager...
Oct  9 09:32:01 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 09:32:01 compute-2 polkitd[1121]: Started polkitd version 0.117
Oct  9 09:32:01 compute-2 systemd[1]: Started Authorization Manager.
Oct  9 09:32:01 compute-2 cloud-init[1211]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:32:01 +0000. Up 6.41 seconds.
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   br-ex    |  True | 192.168.122.102 | 255.255.255.0 | global | fa:16:3e:27:92:90 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |    eth0    |  True |  192.168.26.193 | 255.255.255.0 | global | fa:16:3e:49:30:79 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:27:92:90 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: | ovs-system | False |        .        |       .       |   .    | 12:64:9c:17:a0:eb |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan20   |  True |   172.17.0.102  | 255.255.255.0 | global | 4a:8d:e2:12:1b:28 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan21   |  True |   172.18.0.102  | 255.255.255.0 | global | c6:a3:7e:f5:98:20 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan22   |  True |   172.19.0.102  | 255.255.255.0 | global | 12:06:a2:8f:c3:3a |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan23   |  True |   172.20.0.102  | 255.255.255.0 | global | 92:d6:e4:56:b5:24 |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   2   |    172.17.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   3   |    172.18.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   4   |    172.19.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   5   |    172.20.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   6   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   7   |  192.168.122.0  |   0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct  9 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  9 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Network Stage.
Oct  9 09:32:02 compute-2 systemd[1]: Reached target Cloud-config availability.
Oct  9 09:32:02 compute-2 systemd[1]: Reached target Network is Online.
Oct  9 09:32:02 compute-2 systemd[1]: Starting Cloud-init: Config Stage...
Oct  9 09:32:02 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Oct  9 09:32:02 compute-2 systemd[1]: Starting Notify NFS peers of a restart...
Oct  9 09:32:02 compute-2 systemd[1]: Starting System Logging Service...
Oct  9 09:32:02 compute-2 sm-notify[1244]: Version 2.5.4 starting
Oct  9 09:32:02 compute-2 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:32:02 compute-2 systemd[1]: Starting Permit User Sessions...
Oct  9 09:32:02 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Oct  9 09:32:02 compute-2 systemd[1]: Started Notify NFS peers of a restart.
Oct  9 09:32:02 compute-2 systemd[1]: Finished Permit User Sessions.
Oct  9 09:32:02 compute-2 systemd[1]: Started Command Scheduler.
Oct  9 09:32:02 compute-2 systemd[1]: Started Getty on tty1.
Oct  9 09:32:02 compute-2 systemd[1]: Started Serial Getty on ttyS0.
Oct  9 09:32:02 compute-2 systemd[1]: Reached target Login Prompts.
Oct  9 09:32:02 compute-2 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:32:02 compute-2 systemd[1]: Started System Logging Service.
Oct  9 09:32:02 compute-2 rsyslogd[1245]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1245" x-info="https://www.rsyslog.com"] start
Oct  9 09:32:02 compute-2 systemd[1]: Reached target Multi-User System.
Oct  9 09:32:02 compute-2 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  9 09:32:02 compute-2 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  9 09:32:02 compute-2 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  9 09:32:02 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:02 compute-2 cloud-init[1257]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:32:02 +0000. Up 6.84 seconds.
Oct  9 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Config Stage.
Oct  9 09:32:02 compute-2 systemd[1]: Starting Cloud-init: Final Stage...
Oct  9 09:32:02 compute-2 cloud-init[1261]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:32:02 +0000. Up 7.14 seconds.
Oct  9 09:32:02 compute-2 cloud-init[1261]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:32:02 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 7.18 seconds
Oct  9 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Final Stage.
Oct  9 09:32:02 compute-2 systemd[1]: Reached target Cloud-init target.
Oct  9 09:32:02 compute-2 systemd[1]: Startup finished in 1.267s (kernel) + 1.958s (initrd) + 4.000s (userspace) = 7.227s.
Oct  9 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 45 affinity: Operation not permitted
Oct  9 09:32:10 compute-2 irqbalance[796]: IRQ 45 affinity is now unmanaged
Oct  9 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 43 affinity: Operation not permitted
Oct  9 09:32:10 compute-2 irqbalance[796]: IRQ 43 affinity is now unmanaged
Oct  9 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 42 affinity: Operation not permitted
Oct  9 09:32:10 compute-2 irqbalance[796]: IRQ 42 affinity is now unmanaged
Oct  9 09:32:11 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:32:31 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 09:32:50 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 09:32:50 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 09:32:50 compute-2 systemd-logind[800]: New session 1 of user zuul.
Oct  9 09:32:51 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 09:32:51 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct  9 09:32:51 compute-2 systemd[1270]: Queued start job for default target Main User Target.
Oct  9 09:32:51 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:32:51 compute-2 systemd[1270]: Created slice User Application Slice.
Oct  9 09:32:51 compute-2 systemd[1270]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:32:51 compute-2 systemd[1270]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:32:51 compute-2 systemd[1270]: Reached target Paths.
Oct  9 09:32:51 compute-2 systemd[1270]: Reached target Timers.
Oct  9 09:32:51 compute-2 systemd[1270]: Starting D-Bus User Message Bus Socket...
Oct  9 09:32:51 compute-2 systemd[1270]: Starting Create User's Volatile Files and Directories...
Oct  9 09:32:51 compute-2 systemd[1270]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:32:51 compute-2 systemd[1270]: Finished Create User's Volatile Files and Directories.
Oct  9 09:32:51 compute-2 systemd[1270]: Reached target Sockets.
Oct  9 09:32:51 compute-2 systemd[1270]: Reached target Basic System.
Oct  9 09:32:51 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct  9 09:32:51 compute-2 systemd[1270]: Reached target Main User Target.
Oct  9 09:32:51 compute-2 systemd[1270]: Startup finished in 87ms.
Oct  9 09:32:51 compute-2 systemd[1]: Started Session 1 of User zuul.
Oct  9 09:32:51 compute-2 python3.9[1495]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:32:51 compute-2 systemd[1]: session-1.scope: Deactivated successfully.
Oct  9 09:32:51 compute-2 systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Oct  9 09:32:51 compute-2 systemd-logind[800]: Removed session 1.
Oct  9 09:32:58 compute-2 systemd-logind[800]: New session 3 of user zuul.
Oct  9 09:32:58 compute-2 systemd[1]: Started Session 3 of User zuul.
Oct  9 09:33:02 compute-2 python3[2261]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:33:04 compute-2 python3[2352]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  9 09:33:05 compute-2 python3[2379]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:33:05 compute-2 python3[2405]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:05 compute-2 kernel: loop: module loaded
Oct  9 09:33:05 compute-2 kernel: loop3: detected capacity change from 0 to 41943040
Oct  9 09:33:06 compute-2 python3[2440]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:33:06 compute-2 lvm[2443]: PV /dev/loop3 not used.
Oct  9 09:33:06 compute-2 lvm[2445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:06 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  9 09:33:06 compute-2 lvm[2455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:06 compute-2 lvm[2455]: VG ceph_vg0 finished
Oct  9 09:33:06 compute-2 lvm[2452]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  9 09:33:06 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  9 09:33:06 compute-2 python3[2533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  9 09:33:06 compute-2 python3[2606]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002386.7986755-33835-120594403229080/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:33:07 compute-2 python3[2656]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:33:07 compute-2 systemd[1]: Reloading.
Oct  9 09:33:07 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:33:07 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:33:07 compute-2 systemd[1]: Starting Ceph OSD losetup...
Oct  9 09:33:07 compute-2 bash[2695]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Oct  9 09:33:07 compute-2 systemd[1]: Finished Ceph OSD losetup.
Oct  9 09:33:07 compute-2 lvm[2696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:33:07 compute-2 lvm[2696]: VG ceph_vg0 finished
Oct  9 09:33:09 compute-2 python3[2720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:34:18 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:34:18 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:34:18 compute-2 systemd-logind[800]: New session 4 of user ceph-admin.
Oct  9 09:34:18 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:34:18 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:34:18 compute-2 systemd[2768]: Queued start job for default target Main User Target.
Oct  9 09:34:18 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:34:18 compute-2 systemd[2768]: Created slice User Application Slice.
Oct  9 09:34:18 compute-2 systemd[2768]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:34:18 compute-2 systemd[2768]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:34:18 compute-2 systemd[2768]: Reached target Paths.
Oct  9 09:34:18 compute-2 systemd[2768]: Reached target Timers.
Oct  9 09:34:18 compute-2 systemd[2768]: Starting D-Bus User Message Bus Socket...
Oct  9 09:34:18 compute-2 systemd[2768]: Starting Create User's Volatile Files and Directories...
Oct  9 09:34:18 compute-2 systemd[2768]: Finished Create User's Volatile Files and Directories.
Oct  9 09:34:18 compute-2 systemd[2768]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:34:18 compute-2 systemd[2768]: Reached target Sockets.
Oct  9 09:34:18 compute-2 systemd[2768]: Reached target Basic System.
Oct  9 09:34:18 compute-2 systemd[2768]: Reached target Main User Target.
Oct  9 09:34:18 compute-2 systemd[2768]: Startup finished in 84ms.
Oct  9 09:34:18 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:34:18 compute-2 systemd[1]: Started Session 4 of User ceph-admin.
Oct  9 09:34:18 compute-2 systemd-logind[800]: New session 6 of user ceph-admin.
Oct  9 09:34:18 compute-2 systemd[1]: Started Session 6 of User ceph-admin.
Oct  9 09:34:18 compute-2 systemd-logind[800]: New session 7 of user ceph-admin.
Oct  9 09:34:18 compute-2 systemd[1]: Started Session 7 of User ceph-admin.
Oct  9 09:34:19 compute-2 systemd-logind[800]: New session 8 of user ceph-admin.
Oct  9 09:34:19 compute-2 systemd[1]: Started Session 8 of User ceph-admin.
Oct  9 09:34:19 compute-2 systemd-logind[800]: New session 9 of user ceph-admin.
Oct  9 09:34:19 compute-2 systemd[1]: Started Session 9 of User ceph-admin.
Oct  9 09:34:19 compute-2 systemd-logind[800]: New session 10 of user ceph-admin.
Oct  9 09:34:19 compute-2 systemd[1]: Started Session 10 of User ceph-admin.
Oct  9 09:34:19 compute-2 systemd-logind[800]: New session 11 of user ceph-admin.
Oct  9 09:34:19 compute-2 systemd[1]: Started Session 11 of User ceph-admin.
Oct  9 09:34:20 compute-2 systemd-logind[800]: New session 12 of user ceph-admin.
Oct  9 09:34:20 compute-2 systemd[1]: Started Session 12 of User ceph-admin.
Oct  9 09:34:20 compute-2 systemd-logind[800]: New session 13 of user ceph-admin.
Oct  9 09:34:20 compute-2 systemd[1]: Started Session 13 of User ceph-admin.
Oct  9 09:34:20 compute-2 irqbalance[796]: Cannot change IRQ 44 affinity: Operation not permitted
Oct  9 09:34:20 compute-2 irqbalance[796]: IRQ 44 affinity is now unmanaged
Oct  9 09:34:20 compute-2 systemd-logind[800]: New session 14 of user ceph-admin.
Oct  9 09:34:20 compute-2 systemd[1]: Started Session 14 of User ceph-admin.
Oct  9 09:34:21 compute-2 chronyd[807]: Selected source 69.176.84.79 (pool.ntp.org)
Oct  9 09:34:21 compute-2 systemd-logind[800]: New session 15 of user ceph-admin.
Oct  9 09:34:21 compute-2 systemd[1]: Started Session 15 of User ceph-admin.
Oct  9 09:34:21 compute-2 systemd-logind[800]: New session 16 of user ceph-admin.
Oct  9 09:34:21 compute-2 systemd[1]: Started Session 16 of User ceph-admin.
Oct  9 09:34:22 compute-2 kernel: evm: overlay not supported
Oct  9 09:34:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck4032882647-merged.mount: Deactivated successfully.
Oct  9 09:34:22 compute-2 podman[3101]: 2025-10-09 09:34:22.172578442 +0000 UTC m=+0.064718431 system refresh
Oct  9 09:34:23 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:50 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:50 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:51 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 3356 (sysctl)
Oct  9 09:34:51 compute-2 systemd[1270]: Starting Mark boot as successful...
Oct  9 09:34:51 compute-2 systemd[1270]: Finished Mark boot as successful.
Oct  9 09:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:51 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  9 09:34:51 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  9 09:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:34:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat4151351202-merged.mount: Deactivated successfully.
Oct  9 09:34:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat4151351202-lower\x2dmapped.mount: Deactivated successfully.
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.596118296 +0000 UTC m=+17.404629021 container create 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.584824508 +0000 UTC m=+17.393335233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:09 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  9 09:35:09 compute-2 systemd[1]: Started libpod-conmon-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope.
Oct  9 09:35:09 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.688763849 +0000 UTC m=+17.497274584 container init 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.695434399 +0000 UTC m=+17.503945124 container start 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.696722647 +0000 UTC m=+17.505233373 container attach 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:09 compute-2 peaceful_albattani[3574]: 167 167
Oct  9 09:35:09 compute-2 systemd[1]: libpod-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope: Deactivated successfully.
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.70368643 +0000 UTC m=+17.512197155 container died 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-828d4ed0ac1cef143319c9e51d23d12d616c08658b69ae332cab3d6c03a625fe-merged.mount: Deactivated successfully.
Oct  9 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.724658569 +0000 UTC m=+17.533169285 container remove 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:09 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:09 compute-2 systemd[1]: libpod-conmon-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope: Deactivated successfully.
Oct  9 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.848939029 +0000 UTC m=+0.033119618 container create 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:09 compute-2 systemd[1]: Started libpod-conmon-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope.
Oct  9 09:35:09 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea6bfb2fe90351834e926e1696d78d24e9533a865a013c8a5ebdb45f047e797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea6bfb2fe90351834e926e1696d78d24e9533a865a013c8a5ebdb45f047e797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.911511851 +0000 UTC m=+0.095692459 container init 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.925153307 +0000 UTC m=+0.109333905 container start 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.926296502 +0000 UTC m=+0.110477100 container attach 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.836175471 +0000 UTC m=+0.020356089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:10 compute-2 brave_sutherland[3610]: [
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:    {
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "available": false,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "being_replaced": false,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "ceph_device_lvm": false,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "lsm_data": {},
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "lvs": [],
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "path": "/dev/sr0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "rejected_reasons": [
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "Has a FileSystem",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "Insufficient space (<5GB)"
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        ],
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        "sys_api": {
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "actuators": null,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "device_nodes": [
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:                "sr0"
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            ],
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "devname": "sr0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "human_readable_size": "474.00 KB",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "id_bus": "ata",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "model": "QEMU DVD-ROM",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "nr_requests": "64",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "parent": "/dev/sr0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "partitions": {},
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "path": "/dev/sr0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "removable": "1",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "rev": "2.5+",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "ro": "0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "rotational": "0",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "sas_address": "",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "sas_device_handle": "",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "scheduler_mode": "mq-deadline",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "sectors": 0,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "sectorsize": "2048",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "size": 485376.0,
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "support_discard": "2048",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "type": "disk",
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:            "vendor": "QEMU"
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:        }
Oct  9 09:35:10 compute-2 brave_sutherland[3610]:    }
Oct  9 09:35:10 compute-2 brave_sutherland[3610]: ]
Oct  9 09:35:10 compute-2 systemd[1]: libpod-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope: Deactivated successfully.
Oct  9 09:35:10 compute-2 podman[4674]: 2025-10-09 09:35:10.580412535 +0000 UTC m=+0.021378447 container died 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:10 compute-2 podman[4674]: 2025-10-09 09:35:10.602488236 +0000 UTC m=+0.043454127 container remove 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:10 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:10 compute-2 systemd[1]: libpod-conmon-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope: Deactivated successfully.
Oct  9 09:35:12 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:12 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.741459505 +0000 UTC m=+0.031868920 container create 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:35:12 compute-2 systemd[1]: Started libpod-conmon-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope.
Oct  9 09:35:12 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.790188849 +0000 UTC m=+0.080598285 container init 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.79447859 +0000 UTC m=+0.084888005 container start 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.795769263 +0000 UTC m=+0.086178678 container attach 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:12 compute-2 trusting_turing[5676]: 167 167
Oct  9 09:35:12 compute-2 systemd[1]: libpod-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope: Deactivated successfully.
Oct  9 09:35:12 compute-2 conmon[5676]: conmon 70ccd8a4cace9b58f990 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope/container/memory.events
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.798794669 +0000 UTC m=+0.089204224 container died 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.816028018 +0000 UTC m=+0.106437433 container remove 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.729357082 +0000 UTC m=+0.019766517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:12 compute-2 systemd[1]: libpod-conmon-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope: Deactivated successfully.
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.869313715 +0000 UTC m=+0.033368327 container create 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Oct  9 09:35:12 compute-2 systemd[1]: Started libpod-conmon-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope.
Oct  9 09:35:12 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.918360629 +0000 UTC m=+0.082415241 container init 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.923708554 +0000 UTC m=+0.087763156 container start 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.925333948 +0000 UTC m=+0.089388571 container attach 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.85517923 +0000 UTC m=+0.019233852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:12 compute-2 systemd[1]: libpod-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope: Deactivated successfully.
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.972433871 +0000 UTC m=+0.136488473 container died 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.992235053 +0000 UTC m=+0.156289655 container remove 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:35:13 compute-2 systemd[1]: libpod-conmon-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope: Deactivated successfully.
Oct  9 09:35:13 compute-2 systemd[1]: Reloading.
Oct  9 09:35:13 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:13 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:13 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:13 compute-2 systemd[1]: Reloading.
Oct  9 09:35:13 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:13 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:13 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Oct  9 09:35:13 compute-2 systemd[1]: Reloading.
Oct  9 09:35:13 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:13 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:13 compute-2 systemd[1]: Reached target Ceph cluster 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:13 compute-2 systemd[1]: Reloading.
Oct  9 09:35:13 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:13 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:13 compute-2 systemd[1]: Reloading.
Oct  9 09:35:13 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:13 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:14 compute-2 systemd[1]: Created slice Slice /system/ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:14 compute-2 systemd[1]: Reached target System Time Set.
Oct  9 09:35:14 compute-2 systemd[1]: Reached target System Time Synchronized.
Oct  9 09:35:14 compute-2 systemd[1]: Starting Ceph mon.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:14 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:14 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  9 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.271482158 +0000 UTC m=+0.033302133 container create 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.315913197 +0000 UTC m=+0.077733191 container init 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.32077221 +0000 UTC m=+0.082592184 container start 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:14 compute-2 bash[5967]: 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7
Oct  9 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.257754341 +0000 UTC m=+0.019574335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:14 compute-2 systemd[1]: Started Ceph mon.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:14 compute-2 ceph-mon[5983]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:14 compute-2 ceph-mon[5983]: load: jerasure load: lrc 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Git sha 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB SUMMARY
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB Session ID:  IGXT8FL5CO7VG5U36Z5B
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                     Options.env: 0x56479294bc20
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                Options.info_log: 0x5647939cda20
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                 Options.wal_dir: 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.write_buffer_manager: 0x5647939d1900
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.row_cache: None
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.wal_filter: None
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.wal_compression: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_background_jobs: 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_total_wal_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.compaction_readahead_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Compression algorithms supported:
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kZSTD supported: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kXpressCompression supported: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kZlibCompression supported: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.merge_operator: 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.compaction_filter: None
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5647939cc5c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5647939f1350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.write_buffer_size: 33554432
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.max_write_buffer_number: 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.compression: NoCompression
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.num_levels: 7
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f5b1458-47b7-4c0b-a668-6fbde19939d2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514363222, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514364133, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514364210, "job": 1, "event": "recovery_finished"}
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5647939f2e00
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB pointer 0x564793afc000
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:35:14 compute-2 ceph-mon[5983]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Oct  9 09:35:14 compute-2 ceph-mon[5983]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(???) e0 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).mds e1 new map
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).mds e1 print_map
e1
btime 2025-10-09T09:33:39:705322+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 3314933000852226048, adjusting msgr requires
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to  5248M
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct  9 09:35:14 compute-2 ceph-mon[5983]: OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service ingress.rgw.default spec with placement count:2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service node-exporter spec with placement *
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service grafana spec with placement compute-0;count:1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service prometheus spec with placement compute-0;count:1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Saving service alertmanager spec with placement compute-0;count:1
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Deploying daemon mon.compute-2 on compute-2
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct  9 09:35:14 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct  9 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct  9 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct  9 09:35:15 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:15 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:16 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Oct  9 09:35:16 compute-2 ceph-mon[5983]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct  9 09:35:16 compute-2 ceph-mon[5983]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  9 09:35:16 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:17 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,os=Linux}
Oct  9 09:35:19 compute-2 ceph-mon[5983]: Deploying daemon mon.compute-1 on compute-1
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-0 calling monitor election
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2 calling monitor election
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct  9 09:35:19 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct  9 09:35:19 compute-2 ceph-mon[5983]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct  9 09:35:19 compute-2 ceph-mon[5983]: paxos.1).electionLogic(10) init, last seen epoch 10
Oct  9 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.838960145 +0000 UTC m=+0.030979814 container create 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:19 compute-2 systemd[1]: Started libpod-conmon-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope.
Oct  9 09:35:19 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.911232236 +0000 UTC m=+0.103251905 container init 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.916137206 +0000 UTC m=+0.108156876 container start 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.917768943 +0000 UTC m=+0.109788622 container attach 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:19 compute-2 focused_pasteur[6117]: 167 167
Oct  9 09:35:19 compute-2 systemd[1]: libpod-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope: Deactivated successfully.
Oct  9 09:35:19 compute-2 conmon[6117]: conmon 531df004245249dfb83f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope/container/memory.events
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.920685494 +0000 UTC m=+0.112705163 container died 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.82480451 +0000 UTC m=+0.016824190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-2dba510c6dc012c6aa3a67551d5b20cc17ca4d9834dbd897e3b0182da2987968-merged.mount: Deactivated successfully.
Oct  9 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.939364349 +0000 UTC m=+0.131384018 container remove 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:35:19 compute-2 systemd[1]: libpod-conmon-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope: Deactivated successfully.
Oct  9 09:35:19 compute-2 systemd[1]: Reloading.
Oct  9 09:35:20 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:20 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:20 compute-2 systemd[1]: Reloading.
Oct  9 09:35:20 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:20 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:20 compute-2 systemd[1]: Starting Ceph mgr.compute-2.takdnm for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.598001558 +0000 UTC m=+0.027733892 container create ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/lib/ceph/mgr/ceph-compute-2.takdnm supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.649195159 +0000 UTC m=+0.078927503 container init ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  9 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.654354699 +0000 UTC m=+0.084087023 container start ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  9 09:35:20 compute-2 bash[6248]: ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76
Oct  9 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.586507161 +0000 UTC m=+0.016239505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:20 compute-2 systemd[1]: Started Ceph mgr.compute-2.takdnm for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:20 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:20 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:21 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:22 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:23 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:23 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-0 calling monitor election
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2 calling monitor election
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-1 calling monitor election
Oct  9 09:35:24 compute-2 ceph-mon[5983]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct  9 09:35:24 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  9 09:35:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.199+0000 7f55208db140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.281+0000 7f55208db140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.976+0000 7f55208db140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 ceph-mon[5983]: Deploying daemon mgr.compute-1.etokpp on compute-1
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  9 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.32894793 +0000 UTC m=+0.028241208 container create 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:26 compute-2 systemd[1]: Started libpod-conmon-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope.
Oct  9 09:35:26 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.376003899 +0000 UTC m=+0.075297177 container init 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.380888329 +0000 UTC m=+0.080181597 container start 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.381927504 +0000 UTC m=+0.081220771 container attach 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct  9 09:35:26 compute-2 elegant_noether[6394]: 167 167
Oct  9 09:35:26 compute-2 systemd[1]: libpod-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope: Deactivated successfully.
Oct  9 09:35:26 compute-2 conmon[6394]: conmon 2335c0a72bbe46f7274d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope/container/memory.events
Oct  9 09:35:26 compute-2 podman[6399]: 2025-10-09 09:35:26.414060193 +0000 UTC m=+0.016239198 container died 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  9 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.318517349 +0000 UTC m=+0.017810638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:26 compute-2 systemd[1]: var-lib-containers-storage-overlay-b64f79332c2fcbff3d3fb3e32b69334d476e4ec5ef444912c1e82aa42f0651cf-merged.mount: Deactivated successfully.
Oct  9 09:35:26 compute-2 podman[6399]: 2025-10-09 09:35:26.435895574 +0000 UTC m=+0.038074569 container remove 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:26 compute-2 systemd[1]: libpod-conmon-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope: Deactivated successfully.
Oct  9 09:35:26 compute-2 systemd[1]: Reloading.
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:26 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.539+0000 7f55208db140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.687+0000 7f55208db140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 systemd[1]: Reloading.
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.753+0000 7f55208db140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:26 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:26 compute-2 python3[6473]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.879+0000 7f55208db140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:26 compute-2 systemd[1]: Starting Ceph crash.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.081760598 +0000 UTC m=+0.030359621 container create fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.127669048 +0000 UTC m=+0.076268081 container init fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.131851372 +0000 UTC m=+0.080450395 container start fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:35:27 compute-2 bash[6561]: fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4
Oct  9 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.069768106 +0000 UTC m=+0.018367149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:27 compute-2 systemd[1]: Started Ceph crash.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.266+0000 7f2f00f97640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.266+0000 7f2f00f97640 -1 AuthRegistry(0x7f2efc0696b0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.267+0000 7f2f00f97640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.267+0000 7f2f00f97640 -1 AuthRegistry(0x7f2f00f95ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.268+0000 7f2efad76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2ef9d74640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2efa575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2f00f97640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:27 compute-2 ceph-mon[5983]: Deploying daemon crash.compute-2 on compute-2
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.600102076 +0000 UTC m=+0.032890933 container create 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:27 compute-2 systemd[1]: Started libpod-conmon-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope.
Oct  9 09:35:27 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.653473068 +0000 UTC m=+0.086261926 container init 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.659209508 +0000 UTC m=+0.091998366 container start 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.660505687 +0000 UTC m=+0.093294546 container attach 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct  9 09:35:27 compute-2 gifted_jemison[6685]: 167 167
Oct  9 09:35:27 compute-2 systemd[1]: libpod-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope: Deactivated successfully.
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.664558446 +0000 UTC m=+0.097347304 container died 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-71f4207de405471f2d8e69bf33bc425fad32b2c3ba5eeaf3feb31a1167716983-merged.mount: Deactivated successfully.
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.585714708 +0000 UTC m=+0.018503576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.686993179 +0000 UTC m=+0.119782028 container remove 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:27 compute-2 systemd[1]: libpod-conmon-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope: Deactivated successfully.
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:27.793+0000 7f55208db140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.807404055 +0000 UTC m=+0.028804863 container create b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:35:27 compute-2 systemd[1]: Started libpod-conmon-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope.
Oct  9 09:35:27 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.862179152 +0000 UTC m=+0.083579969 container init b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.867559959 +0000 UTC m=+0.088960766 container start b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  9 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.86872515 +0000 UTC m=+0.090125958 container attach b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  9 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.796266188 +0000 UTC m=+0.017667015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.005+0000 7f55208db140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.080+0000 7f55208db140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.143+0000 7f55208db140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.216+0000 7f55208db140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0493bfe4-e28c-49f6-8185-a07f1e80a32f
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.282+0000 7f55208db140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct  9 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct  9 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct  9 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.604+0000 7f55208db140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 lvm[6780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:35:28 compute-2 lvm[6780]: VG ceph_vg0 finished
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct  9 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.694+0000 7f55208db140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct  9 09:35:28 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/954261656' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: stderr: got monmap epoch 3
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: --> Creating keyring file for osd.2
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  9 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 0493bfe4-e28c-49f6-8185-a07f1e80a32f --setuser ceph --setgroup ceph
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.095+0000 7f55208db140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 _set_new_cache_sizes cache_size:1019927211 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.611+0000 7f55208db140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 systemd[1]: session-6.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd[1]: session-4.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 6.
Oct  9 09:35:29 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 4.
Oct  9 09:35:29 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 9.
Oct  9 09:35:29 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 13.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 10.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 12.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 7.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 8.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 11.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 15.
Oct  9 09:35:29 compute-2 systemd-logind[800]: Removed session 14.
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.696+0000 7f55208db140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.771+0000 7f55208db140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.907+0000 7f55208db140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.981+0000 7f55208db140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.131+0000 7f55208db140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.338+0000 7f55208db140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:30 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.575+0000 7f55208db140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.642+0000 7f55208db140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x55a9027c4d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  exe_path /proc/self/exe
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.825+0000 7ff291100140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.902+0000 7ff291100140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct  9 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:31.613+0000 7ff291100140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:31 compute-2 heuristic_tharp[6719]: stderr: 2025-10-09T09:35:29.018+0000 7f9a8e6a0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Oct  9 09:35:31 compute-2 heuristic_tharp[6719]: stderr: 2025-10-09T09:35:29.289+0000 7f9a8e6a0740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  9 09:35:31 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  9 09:35:31 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:35:31 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  9 09:35:32 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  9 09:35:32 compute-2 systemd[1]: libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Deactivated successfully.
Oct  9 09:35:32 compute-2 systemd[1]: libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Consumed 1.538s CPU time.
Oct  9 09:35:32 compute-2 conmon[6719]: conmon b809419563b3cc170f96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope/container/memory.events
Oct  9 09:35:32 compute-2 podman[6706]: 2025-10-09 09:35:32.177346398 +0000 UTC m=+4.398747206 container died b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.181+0000 7ff291100140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011-merged.mount: Deactivated successfully.
Oct  9 09:35:32 compute-2 podman[6706]: 2025-10-09 09:35:32.210080404 +0000 UTC m=+4.431481212 container remove b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:32 compute-2 systemd[1]: libpod-conmon-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Deactivated successfully.
Oct  9 09:35:32 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Oct  9 09:35:32 compute-2 systemd[1]: session-16.scope: Consumed 42.981s CPU time.
Oct  9 09:35:32 compute-2 systemd-logind[800]: Removed session 16.
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.338+0000 7ff291100140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.402+0000 7ff291100140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.526+0000 7ff291100140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.403+0000 7ff291100140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.599+0000 7ff291100140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.667+0000 7ff291100140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.726+0000 7ff291100140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.795+0000 7ff291100140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.857+0000 7ff291100140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.161+0000 7ff291100140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.246+0000 7ff291100140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct  9 09:35:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 _set_new_cache_sizes cache_size:1020053014 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.628+0000 7ff291100140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct  9 09:35:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct  9 09:35:35 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:35 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:35 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.123+0000 7ff291100140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.185+0000 7ff291100140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.254+0000 7ff291100140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.381+0000 7ff291100140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.442+0000 7ff291100140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:35 compute-2 systemd-logind[800]: New session 17 of user ceph-admin.
Oct  9 09:35:35 compute-2 systemd[1]: Started Session 17 of User ceph-admin.
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.579+0000 7ff291100140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.768+0000 7ff291100140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:36.001+0000 7ff291100140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-2 podman[7834]: 2025-10-09 09:35:36.025621579 +0000 UTC m=+0.043135432 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:36.064+0000 7ff291100140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x556e0c442d00 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct  9 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:36 compute-2 podman[7834]: 2025-10-09 09:35:36.117256197 +0000 UTC m=+0.134770050 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct  9 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:38 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 _set_new_cache_sizes cache_size:1020054705 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:39 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct  9 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct  9 09:35:40 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Oct  9 09:35:40 compute-2 systemd[1]: session-17.scope: Consumed 3.381s CPU time.
Oct  9 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct  9 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:40 compute-2 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Oct  9 09:35:40 compute-2 systemd-logind[800]: Removed session 17.
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.193+0000 7fce0fa2d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.276+0000 7fce0fa2d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:40 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-0 on compute-0
Oct  9 09:35:40 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct  9 09:35:40 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.989+0000 7fce0fa2d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:41 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.557+0000 7fce0fa2d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.718+0000 7fce0fa2d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.786+0000 7fce0fa2d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.909+0000 7fce0fa2d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:42.769+0000 7fce0fa2d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:42.964+0000 7fce0fa2d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.030+0000 7fce0fa2d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.088+0000 7fce0fa2d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.157+0000 7fce0fa2d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.220+0000 7fce0fa2d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.518+0000 7fce0fa2d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.603+0000 7fce0fa2d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.979+0000 7fce0fa2d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct  9 09:35:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.464+0000 7fce0fa2d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.526+0000 7fce0fa2d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.596+0000 7fce0fa2d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.725+0000 7fce0fa2d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.787+0000 7fce0fa2d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.929+0000 7fce0fa2d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.126+0000 7fce0fa2d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.372+0000 7fce0fa2d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.439+0000 7fce0fa2d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x55674ea03860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  exe_path /proc/self/exe
Oct  9 09:35:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.624+0000 7fcbf314c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct  9 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.698+0000 7fcbf314c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct  9 09:35:46 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:46 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:46.390+0000 7fcbf314c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:35:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:46.942+0000 7fcbf314c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:  from numpy import show_config as show_numpy_config
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.085+0000 7fcbf314c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.154+0000 7fcbf314c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct  9 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.273+0000 7fcbf314c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct  9 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.143+0000 7fcbf314c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.332+0000 7fcbf314c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.405+0000 7fcbf314c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.465+0000 7fcbf314c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.533+0000 7fcbf314c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.596+0000 7fcbf314c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.903+0000 7fcbf314c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.989+0000 7fcbf314c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct  9 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.374+0000 7fcbf314c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct  9 09:35:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.874+0000 7fcbf314c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct  9 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.938+0000 7fcbf314c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.009+0000 7fcbf314c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.141+0000 7fcbf314c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct  9 09:35:50 compute-2 systemd[1]: Stopping User Manager for UID 42477...
Oct  9 09:35:50 compute-2 systemd[2768]: Activating special unit Exit the Session...
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped target Main User Target.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped target Basic System.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped target Paths.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped target Sockets.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped target Timers.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:35:50 compute-2 systemd[2768]: Closed D-Bus User Message Bus Socket.
Oct  9 09:35:50 compute-2 systemd[2768]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:35:50 compute-2 systemd[2768]: Removed slice User Application Slice.
Oct  9 09:35:50 compute-2 systemd[2768]: Reached target Shutdown.
Oct  9 09:35:50 compute-2 systemd[2768]: Finished Exit the Session.
Oct  9 09:35:50 compute-2 systemd[2768]: Reached target Exit the Session.
Oct  9 09:35:50 compute-2 systemd[1]: user@42477.service: Deactivated successfully.
Oct  9 09:35:50 compute-2 systemd[1]: Stopped User Manager for UID 42477.
Oct  9 09:35:50 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  9 09:35:50 compute-2 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  9 09:35:50 compute-2 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  9 09:35:50 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  9 09:35:50 compute-2 systemd[1]: Removed slice User Slice of UID 42477.
Oct  9 09:35:50 compute-2 systemd[1]: user-42477.slice: Consumed 47.105s CPU time.
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.209+0000 7fcbf314c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.347+0000 7fcbf314c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.543+0000 7fcbf314c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct  9 09:35:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.779+0000 7fcbf314c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct  9 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.843+0000 7fcbf314c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x56150f233860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct  9 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct  9 09:35:51 compute-2 systemd-logind[800]: New session 18 of user ceph-admin.
Oct  9 09:35:51 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct  9 09:35:51 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  9 09:35:51 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  9 09:35:51 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct  9 09:35:51 compute-2 systemd[9018]: Queued start job for default target Main User Target.
Oct  9 09:35:51 compute-2 systemd[9018]: Created slice User Application Slice.
Oct  9 09:35:51 compute-2 systemd[9018]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:35:51 compute-2 systemd[9018]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:35:51 compute-2 systemd[9018]: Reached target Paths.
Oct  9 09:35:51 compute-2 systemd[9018]: Reached target Timers.
Oct  9 09:35:51 compute-2 systemd[9018]: Starting D-Bus User Message Bus Socket...
Oct  9 09:35:51 compute-2 systemd[9018]: Starting Create User's Volatile Files and Directories...
Oct  9 09:35:51 compute-2 systemd[9018]: Finished Create User's Volatile Files and Directories.
Oct  9 09:35:51 compute-2 systemd[9018]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:35:51 compute-2 systemd[9018]: Reached target Sockets.
Oct  9 09:35:51 compute-2 systemd[9018]: Reached target Basic System.
Oct  9 09:35:51 compute-2 systemd[9018]: Reached target Main User Target.
Oct  9 09:35:51 compute-2 systemd[9018]: Startup finished in 97ms.
Oct  9 09:35:51 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct  9 09:35:51 compute-2 systemd[1]: Started Session 18 of User ceph-admin.
Oct  9 09:35:51 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:35:51 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:35:51 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e2 new map
Oct  9 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e2 print_map
    e2
    btime 2025-10-09T09:35:51:790448+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1
    
    Filesystem 'cephfs' (1)
    fs_name	cephfs
    epoch	2
    flags	12 joinable allow_snaps allow_multimds_snaps
    created	2025-10-09T09:35:51.790428+0000
    modified	2025-10-09T09:35:51.790428+0000
    tableserver	0
    root	0
    session_timeout	60
    session_autoclose	300
    max_file_size	1099511627776
    max_xattr_size	65536
    required_client_features	{}
    last_failure	0
    last_failure_osd_epoch	0
    compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds	1
    in	
    up	{}
    failed	
    damaged	
    stopped	
    data_pools	[7]
    metadata_pool	6
    inline_data	disabled
    balancer	
    bal_rank_mask	-1
    standby_count_wanted	0
    qdb_cluster	leader: 0 members: 
Oct  9 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct  9 09:35:51 compute-2 podman[9140]: 2025-10-09 09:35:51.85947316 +0000 UTC m=+0.045607714 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  9 09:35:51 compute-2 podman[9140]: 2025-10-09 09:35:51.933680256 +0000 UTC m=+0.119814810 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  9 09:35:52 compute-2 ceph-mon[5983]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  9 09:35:52 compute-2 ceph-mon[5983]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  9 09:35:52 compute-2 ceph-mon[5983]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:52 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct  9 09:35:52 compute-2 ceph-mon[5983]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct  9 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:53 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:35:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct  9 09:35:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct  9 09:35:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct  9 09:35:56 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-1 on compute-1
Oct  9 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct  9 09:35:56 compute-2 ceph-mon[5983]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-2 ceph-mon[5983]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct  9 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:56 compute-2 ceph-mon[5983]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  9 09:35:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  9 09:35:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  9 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:35:57 compute-2 systemd[1]: Reloading.
Oct  9 09:35:57 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:57 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:57 compute-2 systemd[1]: Reloading.
Oct  9 09:35:57 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:35:57 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:35:57 compute-2 systemd[1]: Starting Ceph node-exporter.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:35:57 compute-2 bash[10433]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct  9 09:35:58 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-2 on compute-2
Oct  9 09:35:58 compute-2 ceph-mon[5983]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  9 09:35:58 compute-2 bash[10433]: Getting image source signatures
Oct  9 09:35:58 compute-2 bash[10433]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct  9 09:35:58 compute-2 bash[10433]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct  9 09:35:58 compute-2 bash[10433]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct  9 09:35:59 compute-2 bash[10433]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct  9 09:35:59 compute-2 bash[10433]: Writing manifest to image destination
Oct  9 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.206069198 +0000 UTC m=+1.254659036 container create 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b844b5135e8869c1009578f3a25cf260daf648a3a2f08915c093974a5f8216f1/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.244817559 +0000 UTC m=+1.293407387 container init 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.248810805 +0000 UTC m=+1.297400632 container start 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:35:59 compute-2 bash[10433]: 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648
Oct  9 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.196141478 +0000 UTC m=+1.244731316 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.253Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.253Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct  9 09:35:59 compute-2 systemd[1]: Started Ceph node-exporter.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.254Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.254Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=arp
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=bcache
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=bonding
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=btrfs
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=conntrack
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=cpu
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=diskstats
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=dmi
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=edac
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=entropy
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=filefd
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=filesystem
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=hwmon
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=infiniband
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=ipvs
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=loadavg
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=mdadm
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=meminfo
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netclass
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netdev
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netstat
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nfs
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nfsd
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nvme
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=os
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=pressure
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=rapl
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=schedstat
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=selinux
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=sockstat
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=softnet
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=stat
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=tapestats
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=textfile
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=time
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=uname
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=vmstat
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=xfs
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=zfs
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.257Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct  9 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.257Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct  9 09:35:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.6910791 +0000 UTC m=+0.028176124 container create fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 09:35:59 compute-2 systemd[1]: Started libpod-conmon-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope.
Oct  9 09:35:59 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.742712569 +0000 UTC m=+0.079809614 container init fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.747490117 +0000 UTC m=+0.084587143 container start fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.7490579 +0000 UTC m=+0.086154925 container attach fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:35:59 compute-2 jovial_shaw[10600]: 167 167
Oct  9 09:35:59 compute-2 systemd[1]: libpod-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope: Deactivated successfully.
Oct  9 09:35:59 compute-2 conmon[10600]: conmon fc6c223872c1fdb78f2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope/container/memory.events
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.751681077 +0000 UTC m=+0.088778102 container died fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:35:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-f6328fab4b5173bb8ad9716f9016672e1827366a6e090d85092a5d346045728e-merged.mount: Deactivated successfully.
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.772590719 +0000 UTC m=+0.109687744 container remove fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct  9 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.679125973 +0000 UTC m=+0.016223017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:35:59 compute-2 systemd[1]: libpod-conmon-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope: Deactivated successfully.
Oct  9 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.894607298 +0000 UTC m=+0.036719638 container create b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:35:59 compute-2 systemd[1]: Started libpod-conmon-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope.
Oct  9 09:35:59 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.958506246 +0000 UTC m=+0.100618596 container init b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.964216606 +0000 UTC m=+0.106328936 container start b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.965544856 +0000 UTC m=+0.107657196 container attach b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.877675122 +0000 UTC m=+0.019787482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:36:00 compute-2 elegant_merkle[10635]: --> passed data devices: 0 physical, 1 LVM
Oct  9 09:36:00 compute-2 elegant_merkle[10635]: --> All data devices are unavailable
Oct  9 09:36:00 compute-2 systemd[1]: libpod-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope: Deactivated successfully.
Oct  9 09:36:00 compute-2 podman[10622]: 2025-10-09 09:36:00.235925543 +0000 UTC m=+0.378037914 container died b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749-merged.mount: Deactivated successfully.
Oct  9 09:36:00 compute-2 podman[10622]: 2025-10-09 09:36:00.259109573 +0000 UTC m=+0.401221913 container remove b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct  9 09:36:00 compute-2 systemd[1]: libpod-conmon-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope: Deactivated successfully.
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.663310244 +0000 UTC m=+0.027879432 container create 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:36:00 compute-2 systemd[1]: Started libpod-conmon-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope.
Oct  9 09:36:00 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.712665969 +0000 UTC m=+0.077235178 container init 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.71695876 +0000 UTC m=+0.081527950 container start 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.718007062 +0000 UTC m=+0.082576251 container attach 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:00 compute-2 sweet_cerf[10753]: 167 167
Oct  9 09:36:00 compute-2 systemd[1]: libpod-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope: Deactivated successfully.
Oct  9 09:36:00 compute-2 conmon[10753]: conmon 763336aac84886e66461 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope/container/memory.events
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.720252815 +0000 UTC m=+0.084822004 container died 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct  9 09:36:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-57cc32277620f72f082f74c41081d9790162e99d74a05a15d357d814de0465d5-merged.mount: Deactivated successfully.
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.738795054 +0000 UTC m=+0.103364243 container remove 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct  9 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.65197325 +0000 UTC m=+0.016542460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:00 compute-2 systemd[1]: libpod-conmon-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope: Deactivated successfully.
Oct  9 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.853648027 +0000 UTC m=+0.028610936 container create 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:00 compute-2 systemd[1]: Started libpod-conmon-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope.
Oct  9 09:36:00 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.915975675 +0000 UTC m=+0.090938604 container init 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.920870043 +0000 UTC m=+0.095832962 container start 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.922040635 +0000 UTC m=+0.097003544 container attach 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.841188322 +0000 UTC m=+0.016151261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]: {
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:    "2": [
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:        {
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "devices": [
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "/dev/loop3"
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            ],
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "lv_name": "ceph_lv0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "lv_size": "21470642176",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0493bfe4-e28c-49f6-8185-a07f1e80a32f,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "lv_uuid": "Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "name": "ceph_lv0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "tags": {
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.block_uuid": "Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.cephx_lockbox_secret": "",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.cluster_name": "ceph",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.crush_device_class": "",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.encrypted": "0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.osd_fsid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.osd_id": "2",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.type": "block",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.vdo": "0",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:                "ceph.with_tpm": "0"
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            },
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "type": "block",
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:            "vg_name": "ceph_vg0"
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:        }
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]:    ]
Oct  9 09:36:01 compute-2 beautiful_leakey[10789]: }
Oct  9 09:36:01 compute-2 systemd[1]: libpod-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope: Deactivated successfully.
Oct  9 09:36:01 compute-2 podman[10776]: 2025-10-09 09:36:01.149770767 +0000 UTC m=+0.324733696 container died 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:36:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666-merged.mount: Deactivated successfully.
Oct  9 09:36:01 compute-2 podman[10776]: 2025-10-09 09:36:01.169090936 +0000 UTC m=+0.344053845 container remove 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:01 compute-2 systemd[1]: libpod-conmon-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope: Deactivated successfully.
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.586620263 +0000 UTC m=+0.028552655 container create c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:01 compute-2 systemd[1]: Started libpod-conmon-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope.
Oct  9 09:36:01 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.647722692 +0000 UTC m=+0.089655094 container init c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.652546449 +0000 UTC m=+0.094478831 container start c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.654158589 +0000 UTC m=+0.096090971 container attach c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct  9 09:36:01 compute-2 quirky_wright[10905]: 167 167
Oct  9 09:36:01 compute-2 systemd[1]: libpod-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope: Deactivated successfully.
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.65646553 +0000 UTC m=+0.098397912 container died c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.575438506 +0000 UTC m=+0.017370909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-88655573ef8ff3ee20a22371d1138ecdbfc4f2cbf5e07069c0345870becff033-merged.mount: Deactivated successfully.
Oct  9 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.680315516 +0000 UTC m=+0.122247898 container remove c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct  9 09:36:01 compute-2 systemd[1]: libpod-conmon-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope: Deactivated successfully.
Oct  9 09:36:01 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:01 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  9 09:36:01 compute-2 ceph-mon[5983]: Deploying daemon osd.2 on compute-2
Oct  9 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.858882 +0000 UTC m=+0.028326569 container create 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:01 compute-2 systemd[1]: Started libpod-conmon-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope.
Oct  9 09:36:01 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.913560515 +0000 UTC m=+0.083005094 container init 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.923621437 +0000 UTC m=+0.093066007 container start 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.924927591 +0000 UTC m=+0.094372160 container attach 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.84829726 +0000 UTC m=+0.017741848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct  9 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]:                            [--no-systemd] [--no-tmpfs]
Oct  9 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  9 09:36:02 compute-2 podman[10933]: 2025-10-09 09:36:02.076339888 +0000 UTC m=+0.245784467 container died 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:02 compute-2 systemd[1]: libpod-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope: Deactivated successfully.
Oct  9 09:36:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed-merged.mount: Deactivated successfully.
Oct  9 09:36:02 compute-2 podman[10933]: 2025-10-09 09:36:02.103903748 +0000 UTC m=+0.273348316 container remove 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct  9 09:36:02 compute-2 systemd[1]: libpod-conmon-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope: Deactivated successfully.
Oct  9 09:36:02 compute-2 systemd[1]: Reloading.
Oct  9 09:36:02 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:02 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:02 compute-2 systemd[1]: Reloading.
Oct  9 09:36:02 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:02 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:02 compute-2 systemd[1]: Starting Ceph osd.2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.864901312 +0000 UTC m=+0.031767579 container create 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:02 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.906569742 +0000 UTC m=+0.073436019 container init 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.911756723 +0000 UTC m=+0.078622990 container start 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.913268515 +0000 UTC m=+0.080134782 container attach 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct  9 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.84986665 +0000 UTC m=+0.016732938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:03 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:03 compute-2 lvm[11189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:36:03 compute-2 lvm[11189]: VG ceph_vg0 finished
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 bash[11096]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  9 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  9 09:36:03 compute-2 bash[11096]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  9 09:36:03 compute-2 systemd[1]: libpod-01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852.scope: Deactivated successfully.
Oct  9 09:36:03 compute-2 podman[11096]: 2025-10-09 09:36:03.878771289 +0000 UTC m=+1.045637557 container died 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 09:36:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136-merged.mount: Deactivated successfully.
Oct  9 09:36:03 compute-2 podman[11096]: 2025-10-09 09:36:03.901857185 +0000 UTC m=+1.068723452 container remove 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.038969996 +0000 UTC m=+0.028044928 container create c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.075452434 +0000 UTC m=+0.064527386 container init c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct  9 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.080236798 +0000 UTC m=+0.069311740 container start c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct  9 09:36:04 compute-2 bash[11331]: c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000
Oct  9 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.028071163 +0000 UTC m=+0.017146115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:04 compute-2 systemd[1]: Started Ceph osd.2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:04 compute-2 ceph-osd[11347]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:04 compute-2 ceph-osd[11347]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct  9 09:36:04 compute-2 ceph-osd[11347]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.492698747 +0000 UTC m=+0.025939217 container create ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:36:04 compute-2 systemd[1]: Started libpod-conmon-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope.
Oct  9 09:36:04 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.545262593 +0000 UTC m=+0.078503074 container init ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.549619439 +0000 UTC m=+0.082859910 container start ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.550570543 +0000 UTC m=+0.083811014 container attach ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:04 compute-2 dazzling_goldberg[11458]: 167 167
Oct  9 09:36:04 compute-2 systemd[1]: libpod-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope: Deactivated successfully.
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.553531326 +0000 UTC m=+0.086771788 container died ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  9 09:36:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-114603ded65df7b6a3274e4fb53b85326f2a748227019c21abc721975f9576fa-merged.mount: Deactivated successfully.
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.571668018 +0000 UTC m=+0.104908489 container remove ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct  9 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.482085372 +0000 UTC m=+0.015325863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:04 compute-2 systemd[1]: libpod-conmon-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope: Deactivated successfully.
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.685114609 +0000 UTC m=+0.027497367 container create abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:04 compute-2 systemd[1]: Started libpod-conmon-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope.
Oct  9 09:36:04 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.734005037 +0000 UTC m=+0.076387805 container init abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.74202965 +0000 UTC m=+0.084412408 container start abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.744788063 +0000 UTC m=+0.087170820 container attach abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.673266085 +0000 UTC m=+0.015648863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:05 compute-2 lvm[11576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:36:05 compute-2 lvm[11576]: VG ceph_vg0 finished
Oct  9 09:36:05 compute-2 ceph-osd[11347]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  9 09:36:05 compute-2 ceph-osd[11347]: load: jerasure load: lrc 
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:05 compute-2 mystifying_knuth[11500]: {}
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:05 compute-2 podman[11479]: 2025-10-09 09:36:05.253032971 +0000 UTC m=+0.595415730 container died abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:05 compute-2 systemd[1]: libpod-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope: Deactivated successfully.
Oct  9 09:36:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495-merged.mount: Deactivated successfully.
Oct  9 09:36:05 compute-2 podman[11479]: 2025-10-09 09:36:05.276725521 +0000 UTC m=+0.619108278 container remove abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  9 09:36:05 compute-2 systemd[1]: libpod-conmon-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope: Deactivated successfully.
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.69718869 +0000 UTC m=+0.025233838 container create 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:05 compute-2 systemd[1]: Started libpod-conmon-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope.
Oct  9 09:36:05 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.739687945 +0000 UTC m=+0.067733103 container init 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.744572687 +0000 UTC m=+0.072617836 container start 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.745893939 +0000 UTC m=+0.073939107 container attach 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  9 09:36:05 compute-2 ecstatic_pascal[11694]: 167 167
Oct  9 09:36:05 compute-2 systemd[1]: libpod-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope: Deactivated successfully.
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.748294125 +0000 UTC m=+0.076339273 container died 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct  9 09:36:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-52c2dddcdc46689bb0e0cd1f4c4b6de91cb29428d22f72cebd7a3332179f683e-merged.mount: Deactivated successfully.
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.766647286 +0000 UTC m=+0.094692434 container remove 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.687258362 +0000 UTC m=+0.015303530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:05 compute-2 ceph-osd[11347]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  9 09:36:05 compute-2 ceph-osd[11347]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:05 compute-2 systemd[1]: libpod-conmon-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope: Deactivated successfully.
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:05 compute-2 systemd[1]: Reloading.
Oct  9 09:36:05 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:05 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:06 compute-2 systemd[1]: Reloading.
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:06 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:06 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:06 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.mbbcec for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount shared_bdev_used = 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct  9 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Git sha 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB SUMMARY
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB Session ID:  70IKZGC0PAQBQGU5NYTU
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                     Options.env: 0x55bdd3549650
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                Options.info_log: 0x55bdd436f580
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.write_buffer_manager: 0x55bdd4462a00
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.row_cache: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.wal_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.wal_compression: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compression algorithms supported:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kZSTD supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kXpressCompression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kBZip2Compression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kLZ4Compression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kZlibCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: #011kSnappyCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358b350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358b350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b80159d-2ad0-4081-a2fe-760c1c44de54
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566349086, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566349264, "job": 1, "event": "recovery_finished"}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: freelist init
Oct  9 09:36:06 compute-2 ceph-osd[11347]: freelist _read_cfg
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs umount
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) close
Oct  9 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.381922504 +0000 UTC m=+0.027410182 container create df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct  9 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.mbbcec supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.420775644 +0000 UTC m=+0.066263341 container init df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.424697501 +0000 UTC m=+0.070185178 container start df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  9 09:36:06 compute-2 bash[11980]: df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849
Oct  9 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.370976783 +0000 UTC m=+0.016464480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:06 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.mbbcec for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:06 compute-2 radosgw[12043]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:06 compute-2 radosgw[12043]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct  9 09:36:06 compute-2 radosgw[12043]: framework: beast
Oct  9 09:36:06 compute-2 radosgw[12043]: framework conf key: endpoint, val: 192.168.122.102:8082
Oct  9 09:36:06 compute-2 radosgw[12043]: init_numa not setting numa affinity
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluefs mount shared_bdev_used = 4718592
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: RocksDB version: 7.9.2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Git sha 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compile date 2025-07-17 03:12:14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB SUMMARY
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB Session ID:  70IKZGC0PAQBQGU5NYTV
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: CURRENT file:  CURRENT
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: IDENTITY file:  IDENTITY
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.error_if_exists: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.create_if_missing: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.paranoid_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                     Options.env: 0x55bdd35493b0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                Options.info_log: 0x55bdd436fe80
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_file_opening_threads: 16
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.statistics: (nil)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.use_fsync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.max_log_file_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.allow_fallocate: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_reads: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.create_missing_column_families: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.db_log_dir: 
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                 Options.wal_dir: db.wal
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.advise_random_on_open: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.write_buffer_manager: 0x55bdd4462aa0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                            Options.rate_limiter: (nil)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.unordered_write: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.row_cache: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.wal_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_ingest_behind: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.two_write_queues: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manual_wal_flush: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.wal_compression: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.atomic_flush: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.log_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_data_in_errors: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.db_host_id: __hostname__
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_jobs: 4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_compactions: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_subcompactions: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.max_open_files: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_background_flushes: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compression algorithms supported:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kZSTD supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kXpressCompression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kBZip2Compression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kLZ4Compression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kZlibCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: 	kSnappyCompression supported: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358a9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358a9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358a9b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a9b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55bdd358a590
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358a590#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55bdd358a590#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b80159d-2ad0-4081-a2fe-760c1c44de54
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566629184, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566630695, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566631615, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566632426, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566632986, "job": 1, "event": "recovery_finished"}
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdd46cfc00
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB pointer 0x55bdd46ae000
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  9 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usag
Oct  9 09:36:06 compute-2 ceph-osd[11347]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  9 09:36:06 compute-2 ceph-osd[11347]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  9 09:36:06 compute-2 ceph-osd[11347]: _get_class not permitted to load lua
Oct  9 09:36:06 compute-2 ceph-osd[11347]: _get_class not permitted to load sdk
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 load_pgs
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 load_pgs opened 0 pgs
Oct  9 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 log_to_monitors true
Oct  9 09:36:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:06.645+0000 7ff04a68c740 -1 osd.2 0 log_to_monitors true
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:07 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct  9 09:36:07 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  9 09:36:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct  9 09:36:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct  9 09:36:07 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  9 09:36:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 done with init, starting boot process
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 start_boot
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  9 09:36:08 compute-2 ceph-osd[11347]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  9 09:36:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:08 compute-2 radosgw[12043]: rgw main: failed to create zonegroup with (17) File exists
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.227277314 +0000 UTC m=+0.037194362 container create 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 86.253 iops: 22080.769 elapsed_sec: 0.136
Oct  9 09:36:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [WRN] : OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 0 waiting for initial osdmap
Oct  9 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:09.231+0000 7ff04660f640 -1 osd.2 0 waiting for initial osdmap
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 check_osdmap_features require_osd_release unknown -> squid
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 set_numa_affinity not setting numa affinity
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct  9 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:09.253+0000 7ff041c37640 -1 osd.2 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  9 09:36:09 compute-2 systemd[1]: Started libpod-conmon-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope.
Oct  9 09:36:09 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.285305475 +0000 UTC m=+0.095222533 container init 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.289721693 +0000 UTC m=+0.099638742 container start 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.291865576 +0000 UTC m=+0.101782644 container attach 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct  9 09:36:09 compute-2 sleepy_brahmagupta[12943]: 167 167
Oct  9 09:36:09 compute-2 systemd[1]: libpod-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope: Deactivated successfully.
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.293828378 +0000 UTC m=+0.103745425 container died 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:36:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-388b38f905a4b18fb893559c0ecacd1631e6a986cccebe9542fadc1b7fb41c17-merged.mount: Deactivated successfully.
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.209617772 +0000 UTC m=+0.019534840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.321179307 +0000 UTC m=+0.131096355 container remove 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:36:09 compute-2 systemd[1]: libpod-conmon-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope: Deactivated successfully.
Oct  9 09:36:09 compute-2 systemd[1]: Reloading.
Oct  9 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:09 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:09 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:09 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct  9 09:36:09 compute-2 ceph-osd[11347]: osd.2 34 state: booting -> active
Oct  9 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct  9 09:36:09 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:09 compute-2 systemd[1]: Reloading.
Oct  9 09:36:09 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:09 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:09 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.zfggbi for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.907911399 +0000 UTC m=+0.026801595 container create 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.zfggbi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.955029264 +0000 UTC m=+0.073919470 container init 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.959087166 +0000 UTC m=+0.077977362 container start 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:36:09 compute-2 bash[13073]: 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34
Oct  9 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.896821825 +0000 UTC m=+0.015712042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:09 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.zfggbi for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:09 compute-2 ceph-mds[13089]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:09 compute-2 ceph-mds[13089]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct  9 09:36:09 compute-2 ceph-mds[13089]: main not setting numa affinity
Oct  9 09:36:09 compute-2 ceph-mds[13089]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi[13085]: starting mds.cephfs.compute-2.zfggbi at 
Oct  9 09:36:09 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 2 from mon.0
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-mon[5983]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct  9 09:36:10 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct  9 09:36:10 compute-2 ceph-mon[5983]: OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  9 09:36:10 compute-2 ceph-mon[5983]: osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 3 from mon.0
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34) [2] r=0 lpr=35 pi=[11,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e3 new map
Oct  9 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e3 print_map#012e3#012btime 2025-10-09T09:36:10:513915+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:35:51.790428+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zfggbi{-1:14535} state up:standby seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34) [2] r=0 lpr=35 pi=[13,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Monitors have assigned me to become a standby
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34) [2] r=0 lpr=35 pi=[13,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34) [2] r=0 lpr=35 pi=[11,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 4 from mon.0
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map I am now mds.0.4
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x1
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x100
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x600
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x601
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x602
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x603
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x604
Oct  9 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e4 new map
Oct  9 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e4 print_map#012e4#012btime 2025-10-09T09:36:10:526987+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:10.526981+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.zfggbi{0:14535} state up:creating seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x605
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x606
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x607
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x608
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x609
Oct  9 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 creating_done
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct  9 09:36:11 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:11 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  9 09:36:11 compute-2 ceph-mon[5983]: daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  9 09:36:11 compute-2 ceph-mon[5983]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  9 09:36:11 compute-2 ceph-mon[5983]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  9 09:36:11 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct  9 09:36:11 compute-2 ceph-mon[5983]: daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  9 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  9 09:36:11 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 5 from mon.0
Oct  9 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map I am now mds.0.4
Oct  9 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  9 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 recovery_done -- successful recovery!
Oct  9 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 active_start
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e5 new map
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e5 print_map#012e5#012btime 2025-10-09T09:36:11:555720+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e6 new map
Oct  9 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e6 print_map#012e6#012btime 2025-10-09T09:36:11:561187+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct  9 09:36:12 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  9 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e7 new map
Oct  9 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e7 print_map#012e7#012btime 2025-10-09T09:36:12:564873+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:11.555718+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct  9 09:36:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct  9 09:36:13 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:13 compute-2 ceph-mon[5983]: Deploying daemon alertmanager.compute-0 on compute-0
Oct  9 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct  9 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct  9 09:36:14 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  9 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct  9 09:36:15 compute-2 ceph-mds[13089]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  9 09:36:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi[13085]: 2025-10-09T09:36:15.537+0000 7ff40218f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  9 09:36:15 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 8 from mon.0
Oct  9 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e8 new map
Oct  9 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e8 print_map#012e8#012btime 2025-10-09T09:36:15:540254+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:14.585925+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:15 compute-2 radosgw[12043]: v1 topic migration: starting v1 topic migration..
Oct  9 09:36:15 compute-2 radosgw[12043]: LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec[12039]: 2025-10-09T09:36:15.598+0000 7feaf1e2d980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: v1 topic migration: finished v1 topic migration
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: framework: beast
Oct  9 09:36:15 compute-2 radosgw[12043]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  9 09:36:15 compute-2 radosgw[12043]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: starting handler: beast
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  9 09:36:15 compute-2 radosgw[12043]: set uid:gid to 167:167 (ceph:ceph)
Oct  9 09:36:15 compute-2 radosgw[12043]: mgrc service_daemon_register rgw.24283 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.mbbcec,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=773beadf-adcd-43ff-a482-a2d7a5b40bd8,zone_name=default,zonegroup_id=74fea7f9-d931-4447-a756-db2299521313,zonegroup_name=default}
Oct  9 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  9 09:36:16 compute-2 ceph-mon[5983]: Regenerating cephadm self-signed grafana TLS certificates
Oct  9 09:36:16 compute-2 ceph-mon[5983]: Deploying daemon grafana.compute-0 on compute-0
Oct  9 09:36:16 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e9 new map
Oct  9 09:36:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e9 print_map#012e9#012btime 2025-10-09T09:36:16:832969+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-09T09:35:51.790428+0000#012modified#0112025-10-09T09:36:14.585925+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=14535}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 14535 members: 14535#012[mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct  9 09:36:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:22 compute-2 ceph-mon[5983]: Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct  9 09:36:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:26 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:27 compute-2 ceph-mon[5983]: Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct  9 09:36:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:36:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:27.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.031044303 +0000 UTC m=+2.073946944 container create 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 systemd[1]: Started libpod-conmon-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope.
Oct  9 09:36:29 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.019898213 +0000 UTC m=+2.062800874 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.083021062 +0000 UTC m=+2.125923703 container init 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.087747856 +0000 UTC m=+2.130650497 container start 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.088797345 +0000 UTC m=+2.131699985 container attach 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 systemd[1]: libpod-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope: Deactivated successfully.
Oct  9 09:36:29 compute-2 conmon[13336]: conmon 0e904bbca9eaa00819fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope/container/memory.events
Oct  9 09:36:29 compute-2 epic_franklin[13336]: 0 0
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.09124959 +0000 UTC m=+2.134152231 container died 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 systemd[1]: var-lib-containers-storage-overlay-b47882e98ba870f162a06c619caeaf9062ee85ff21168da7ed7721e9b4dad5f1-merged.mount: Deactivated successfully.
Oct  9 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.10926881 +0000 UTC m=+2.152171451 container remove 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct  9 09:36:29 compute-2 systemd[1]: libpod-conmon-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope: Deactivated successfully.
Oct  9 09:36:29 compute-2 systemd[1]: Reloading.
Oct  9 09:36:29 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:29 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:29 compute-2 systemd[1]: Reloading.
Oct  9 09:36:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:29 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:29 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:29 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.gkeojf for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.696329843 +0000 UTC m=+0.026917372 container create 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da51568ca7bb05713a3e973fcd8e649070918a706c3c684eeb0b713f43496906/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.732226778 +0000 UTC m=+0.062814318 container init 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.736175144 +0000 UTC m=+0.066762674 container start 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:29 compute-2 bash[13470]: 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c
Oct  9 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.685371076 +0000 UTC m=+0.015958626 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:36:29 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.gkeojf for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf[13482]: [NOTICE] 281/093629 (2) : New worker #1 (4) forked
Oct  9 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:29.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:30 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:30 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:30 compute-2 ceph-mon[5983]: Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct  9 09:36:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:31.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:36:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:36:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.346337884 +0000 UTC m=+3.211536467 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.355420378 +0000 UTC m=+3.220618942 container create 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  9 09:36:33 compute-2 systemd[1]: Started libpod-conmon-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope.
Oct  9 09:36:33 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.394167345 +0000 UTC m=+3.259365929 container init 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=)
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.398734793 +0000 UTC m=+3.263933356 container start 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.399659342 +0000 UTC m=+3.264857895 container attach 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2)
Oct  9 09:36:33 compute-2 trusting_cerf[13656]: 0 0
Oct  9 09:36:33 compute-2 systemd[1]: libpod-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope: Deactivated successfully.
Oct  9 09:36:33 compute-2 conmon[13656]: conmon 61cf5003552e9090f4a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope/container/memory.events
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.403451903 +0000 UTC m=+3.268650466 container died 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct  9 09:36:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-9404fe28d29a4813fcec7025c71e249f7a0f9fadff88ee9928052bf3728563b7-merged.mount: Deactivated successfully.
Oct  9 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.430379872 +0000 UTC m=+3.295578435 container remove 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Oct  9 09:36:33 compute-2 systemd[1]: libpod-conmon-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope: Deactivated successfully.
Oct  9 09:36:33 compute-2 systemd[1]: Reloading.
Oct  9 09:36:33 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:33 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:33 compute-2 systemd[1]: Reloading.
Oct  9 09:36:33 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:33 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:33.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:33 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.tcjodw for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.053476042 +0000 UTC m=+0.027770684 container create a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git)
Oct  9 09:36:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709a7a8332ebb2081a749167efe33fdf7251040b3cf49bf74e854e3ff5ef17ef/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.092480777 +0000 UTC m=+0.066775419 container init a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Oct  9 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.097493216 +0000 UTC m=+0.071787857 container start a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=2.2.4, distribution-scope=public, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Oct  9 09:36:34 compute-2 bash[13791]: a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b
Oct  9 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.042148262 +0000 UTC m=+0.016442914 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:36:34 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.tcjodw for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Starting VRRP child process, pid=4
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Startup complete
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: VRRP_Script(check_backend) succeeded
Oct  9 09:36:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:35 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:36:35 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:36:35 compute-2 ceph-mon[5983]: Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct  9 09:36:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:35.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:37 2025: (VI_0) Entering MASTER STATE
Oct  9 09:36:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:39 compute-2 ceph-mon[5983]: Deploying daemon prometheus.compute-0 on compute-0
Oct  9 09:36:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:41.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:41 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct  9 09:36:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:41 2025: (VI_0) Entering BACKUP STATE
Oct  9 09:36:41 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:36:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:36:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct  9 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct  9 09:36:44 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Oct  9 09:36:44 compute-2 systemd[1]: session-18.scope: Consumed 16.429s CPU time.
Oct  9 09:36:44 compute-2 systemd-logind[800]: Session 18 logged out. Waiting for processes to exit.
Oct  9 09:36:44 compute-2 systemd-logind[800]: Removed session 18.
Oct  9 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct  9 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct  9 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.174+0000 7fa874c23140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct  9 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.246+0000 7fa874c23140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct  9 09:36:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct  9 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.926+0000 7fa874c23140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  9 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct  9 09:36:44 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct  9 09:36:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.469+0000 7fa874c23140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:  from numpy import show_config as show_numpy_config
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.609+0000 7fa874c23140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.671+0000 7fa874c23140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct  9 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.794+0000 7fa874c23140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  9 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct  9 09:36:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:36:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:45.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct  9 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.657+0000 7fa874c23140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct  9 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.846+0000 7fa874c23140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct  9 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.913+0000 7fa874c23140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct  9 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.971+0000 7fa874c23140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  9 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct  9 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.040+0000 7fa874c23140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct  9 09:36:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:47.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.102+0000 7fa874c23140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct  9 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.401+0000 7fa874c23140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct  9 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.489+0000 7fa874c23140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct  9 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.870+0000 7fa874c23140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  9 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct  9 09:36:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.353+0000 7fa874c23140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.415+0000 7fa874c23140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.488+0000 7fa874c23140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.616+0000 7fa874c23140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.678+0000 7fa874c23140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct  9 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.811+0000 7fa874c23140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  9 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.001+0000 7fa874c23140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct  9 09:36:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.231+0000 7fa874c23140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.291+0000 7fa874c23140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: prometheus
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] Starting engine...
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: CherryPy Checker:
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: The Application mounted at '' has an empty config.
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x563c932d3860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct  9 09:36:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct  9 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct  9 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] Engine started.
Oct  9 09:36:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct  9 09:36:49 compute-2 systemd-logind[800]: New session 20 of user ceph-admin.
Oct  9 09:36:49 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Oct  9 09:36:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:50 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct  9 09:36:50 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct  9 09:36:50 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct  9 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct  9 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct  9 09:36:50 compute-2 podman[13986]: 2025-10-09 09:36:50.38594812 +0000 UTC m=+0.039719176 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:50 compute-2 podman[13986]: 2025-10-09 09:36:50.465188537 +0000 UTC m=+0.118959594 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct  9 09:36:50 compute-2 podman[14065]: 2025-10-09 09:36:50.714195842 +0000 UTC m=+0.037197558 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:50 compute-2 podman[14065]: 2025-10-09 09:36:50.721044312 +0000 UTC m=+0.044046018 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:36:51 compute-2 podman[14166]: 2025-10-09 09:36:51.010899868 +0000 UTC m=+0.036661975 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:51 compute-2 podman[14184]: 2025-10-09 09:36:51.068940989 +0000 UTC m=+0.045560352 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:51 compute-2 podman[14166]: 2025-10-09 09:36:51.071666424 +0000 UTC m=+0.097428511 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:36:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:51 compute-2 podman[14217]: 2025-10-09 09:36:51.204529027 +0000 UTC m=+0.034379410 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  9 09:36:51 compute-2 podman[14217]: 2025-10-09 09:36:51.214041555 +0000 UTC m=+0.043891918 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Oct  9 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct  9 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct  9 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct  9 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct  9 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  9 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:51.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.357367571 +0000 UTC m=+0.027395149 container create 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct  9 09:36:52 compute-2 systemd[1]: Started libpod-conmon-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope.
Oct  9 09:36:52 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.418263043 +0000 UTC m=+0.088290621 container init 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.422611906 +0000 UTC m=+0.092639484 container start 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.423868726 +0000 UTC m=+0.093896304 container attach 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:52 compute-2 gracious_franklin[14481]: 167 167
Oct  9 09:36:52 compute-2 systemd[1]: libpod-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope: Deactivated successfully.
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.425301588 +0000 UTC m=+0.095329166 container died 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:36:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-77b14ccaac014a0e6e816b29b44ce39f61e6f15deb1440d8a0442f348e379a4b-merged.mount: Deactivated successfully.
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.441386385 +0000 UTC m=+0.111413963 container remove 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  9 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.344605641 +0000 UTC m=+0.014633240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:52 compute-2 systemd[1]: libpod-conmon-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope: Deactivated successfully.
Oct  9 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.551313763 +0000 UTC m=+0.025988046 container create afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:36:52 compute-2 systemd[1]: Started libpod-conmon-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope.
Oct  9 09:36:52 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.603137455 +0000 UTC m=+0.077811758 container init afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.610489401 +0000 UTC m=+0.085163684 container start afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.611468747 +0000 UTC m=+0.086143031 container attach afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.541299947 +0000 UTC m=+0.015974249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-2 hardcore_jang[14516]: [
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:    {
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "available": false,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "being_replaced": false,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "ceph_device_lvm": false,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "lsm_data": {},
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "lvs": [],
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "path": "/dev/sr0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "rejected_reasons": [
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "Has a FileSystem",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "Insufficient space (<5GB)"
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        ],
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        "sys_api": {
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "actuators": null,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "device_nodes": [
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:                "sr0"
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            ],
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "devname": "sr0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "human_readable_size": "474.00 KB",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "id_bus": "ata",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "model": "QEMU DVD-ROM",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "nr_requests": "64",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "parent": "/dev/sr0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "partitions": {},
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "path": "/dev/sr0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "removable": "1",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "rev": "2.5+",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "ro": "0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "rotational": "0",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "sas_address": "",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "sas_device_handle": "",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "scheduler_mode": "mq-deadline",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "sectors": 0,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "sectorsize": "2048",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "size": 485376.0,
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "support_discard": "2048",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "type": "disk",
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:            "vendor": "QEMU"
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:        }
Oct  9 09:36:53 compute-2 hardcore_jang[14516]:    }
Oct  9 09:36:53 compute-2 hardcore_jang[14516]: ]
Oct  9 09:36:53 compute-2 systemd[1]: libpod-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope: Deactivated successfully.
Oct  9 09:36:53 compute-2 podman[15522]: 2025-10-09 09:36:53.086687632 +0000 UTC m=+0.017785335 container died afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d-merged.mount: Deactivated successfully.
Oct  9 09:36:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:53 compute-2 podman[15522]: 2025-10-09 09:36:53.106379772 +0000 UTC m=+0.037477465 container remove afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:36:53 compute-2 systemd[1]: libpod-conmon-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope: Deactivated successfully.
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:36:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:53.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:36:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress#012service_id: nfs.cephfs#012service_name: ingress.nfs.cephfs#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012spec:#012  backend_service: nfs.cephfs#012  enable_haproxy_protocol: true#012  first_virtual_router_id: 50#012  frontend_port: 2049#012  monitor_port: 9049#012  virtual_ip: 192.168.122.2/24#012''')): max() arg is an empty sequence#012Traceback (most recent call last):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services#012    if self._apply_service(spec):#012  File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service#012    daemon_spec = svc.prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create#012    return self.haproxy_prepare_create(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create#012    daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)#012  File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config#012    num_ranks = 1 + max(by_rank.keys())#012ValueError: max() arg is an empty sequence
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct  9 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct  9 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct  9 09:36:55 compute-2 ceph-mon[5983]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Oct  9 09:36:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:36:56 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:36:56 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:36:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct  9 09:36:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct  9 09:36:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.571588428 +0000 UTC m=+0.042799462 container create b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:36:59 compute-2 systemd[1]: Started libpod-conmon-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope.
Oct  9 09:36:59 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.634185629 +0000 UTC m=+0.105396674 container init b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.63943006 +0000 UTC m=+0.110641095 container start b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.640948594 +0000 UTC m=+0.112159629 container attach b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 09:36:59 compute-2 jovial_dubinsky[16528]: 167 167
Oct  9 09:36:59 compute-2 systemd[1]: libpod-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope: Deactivated successfully.
Oct  9 09:36:59 compute-2 conmon[16528]: conmon b9b0dfff2e84c7994227 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope/container/memory.events
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.643511168 +0000 UTC m=+0.114722293 container died b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.555210168 +0000 UTC m=+0.026421213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:36:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-d0d0462144776940901ae48d3e0fab9a51ccc38b79b2ec79bf4dc91101dcf235-merged.mount: Deactivated successfully.
Oct  9 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.662524126 +0000 UTC m=+0.133735161 container remove b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:36:59 compute-2 systemd[1]: libpod-conmon-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope: Deactivated successfully.
Oct  9 09:36:59 compute-2 systemd[1]: Reloading.
Oct  9 09:36:59 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:36:59 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:36:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:36:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:36:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:36:59 compute-2 systemd[1]: Reloading.
Oct  9 09:37:00 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:00 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:00 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.343970736 +0000 UTC m=+0.032493446 container create 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.398419889 +0000 UTC m=+0.086942599 container init 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Oct  9 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.405935594 +0000 UTC m=+0.094458304 container start 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct  9 09:37:00 compute-2 bash[16658]: 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212
Oct  9 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.330292999 +0000 UTC m=+0.018815729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:00 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:01 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:01 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct  9 09:37:01 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct  9 09:37:01 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct  9 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct  9 09:37:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:01.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:01.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:02 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct  9 09:37:02 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct  9 09:37:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct  9 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct  9 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  9 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  9 09:37:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:05 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct  9 09:37:05 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct  9 09:37:05 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct  9 09:37:05 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:05.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.044410) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626044515, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5741, "num_deletes": 259, "total_data_size": 19312681, "memory_usage": 20425528, "flush_reason": "Manual Compaction"}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626070152, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12329919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5746, "table_properties": {"data_size": 12308316, "index_size": 13617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6917, "raw_key_size": 66691, "raw_average_key_size": 24, "raw_value_size": 12254607, "raw_average_value_size": 4449, "num_data_blocks": 604, "num_entries": 2754, "num_filter_entries": 2754, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 1760002514, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 25784 microseconds, and 18579 cpu microseconds.
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070204) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12329919 bytes OK
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070224) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070644) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070658) EVENT_LOG_v1 {"time_micros": 1760002626070654, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19281571, prev total WAL file size 19283475, number of live WAL files 2.
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.073383) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626073482, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12331567, "oldest_snapshot_seqno": -1}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2498 keys, 12325993 bytes, temperature: kUnknown
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626101831, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12325993, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12305072, "index_size": 13580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6277, "raw_key_size": 63134, "raw_average_key_size": 25, "raw_value_size": 12254665, "raw_average_value_size": 4905, "num_data_blocks": 602, "num_entries": 2498, "num_filter_entries": 2498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102167) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12325993 bytes
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102574) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 432.0 rd, 431.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.8, 0.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2759, records dropped: 261 output_compression: NoCompression
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102591) EVENT_LOG_v1 {"time_micros": 1760002626102582, "job": 4, "event": "compaction_finished", "compaction_time_micros": 28545, "compaction_time_cpu_micros": 20374, "output_level": 6, "num_output_files": 1, "total_output_size": 12325993, "num_input_records": 2759, "num_output_records": 2498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626104623, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626104852, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  9 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.073317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000002:nfs.cephfs.1: -2
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:07.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:08 compute-2 podman[16884]: 2025-10-09 09:37:08.786598754 +0000 UTC m=+0.056258858 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:37:08 compute-2 podman[16884]: 2025-10-09 09:37:08.884239789 +0000 UTC m=+0.153899882 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:09 compute-2 podman[16964]: 2025-10-09 09:37:09.206057171 +0000 UTC m=+0.039956571 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-2 podman[16964]: 2025-10-09 09:37:09.215108703 +0000 UTC m=+0.049008082 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:09 compute-2 podman[17065]: 2025-10-09 09:37:09.6093714 +0000 UTC m=+0.044963834 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:09 compute-2 podman[17065]: 2025-10-09 09:37:09.615009705 +0000 UTC m=+0.050602119 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:09 compute-2 podman[17118]: 2025-10-09 09:37:09.774686021 +0000 UTC m=+0.038175151 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  9 09:37:09 compute-2 podman[17118]: 2025-10-09 09:37:09.786066375 +0000 UTC m=+0.049555485 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Oct  9 09:37:09 compute-2 podman[17159]: 2025-10-09 09:37:09.921733389 +0000 UTC m=+0.048048101 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:37:09 compute-2 podman[17159]: 2025-10-09 09:37:09.931079697 +0000 UTC m=+0.057394398 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct  9 09:37:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:09.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:11 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct  9 09:37:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:11.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:12 compute-2 ceph-mon[5983]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct  9 09:37:12 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct  9 09:37:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:13.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:14 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:14 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct  9 09:37:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:15.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.194713568 +0000 UTC m=+0.029223006 container create cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 systemd[1]: Started libpod-conmon-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope.
Oct  9 09:37:15 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.271511813 +0000 UTC m=+0.106021270 container init cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.276943547 +0000 UTC m=+0.111452986 container start cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.278447103 +0000 UTC m=+0.112956561 container attach cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.182444128 +0000 UTC m=+0.016953586 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:15 compute-2 silly_mayer[17289]: 0 0
Oct  9 09:37:15 compute-2 systemd[1]: libpod-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope: Deactivated successfully.
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.281688357 +0000 UTC m=+0.116197795 container died cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-56464be254efef73de856d2c5f47a0165c4aa2921d827a7ee89356e7e23458d2-merged.mount: Deactivated successfully.
Oct  9 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.304270738 +0000 UTC m=+0.138780175 container remove cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct  9 09:37:15 compute-2 systemd[1]: libpod-conmon-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope: Deactivated successfully.
Oct  9 09:37:15 compute-2 systemd[1]: Reloading.
Oct  9 09:37:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:15 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4001e10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:15 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:15 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:15 compute-2 systemd[1]: Reloading.
Oct  9 09:37:15 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:15 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:15 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct  9 09:37:15 compute-2 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-2.iyubhq for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.034510303 +0000 UTC m=+0.031441241 container create 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct  9 09:37:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b1420b5cf7143ab4f3c5e5dc855f602e24e738357efff898f85e0792ec0e22/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.076776298 +0000 UTC m=+0.073707246 container init 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct  9 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.080636498 +0000 UTC m=+0.077567436 container start 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct  9 09:37:16 compute-2 bash[17423]: 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110
Oct  9 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.02226161 +0000 UTC m=+0.019192569 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct  9 09:37:16 compute-2 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-2.iyubhq for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/093716 (2) : New worker #1 (4) forked
Oct  9 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093716 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:16 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.606012763 +0000 UTC m=+0.040425206 container create 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4)
Oct  9 09:37:16 compute-2 systemd[1]: Started libpod-conmon-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope.
Oct  9 09:37:16 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.673493062 +0000 UTC m=+0.107905516 container init 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=2.2.4, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.679915526 +0000 UTC m=+0.114327959 container start 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=keepalived for Ceph, version=2.2.4, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc.)
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.681182265 +0000 UTC m=+0.115594699 container attach 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793)
Oct  9 09:37:16 compute-2 lucid_allen[17543]: 0 0
Oct  9 09:37:16 compute-2 systemd[1]: libpod-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope: Deactivated successfully.
Oct  9 09:37:16 compute-2 conmon[17543]: conmon 96d5182dbb98376f6e36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope/container/memory.events
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.685150299 +0000 UTC m=+0.119562731 container died 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.593453475 +0000 UTC m=+0.027865928 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-c896448760f11553d02f71f05001b6463caea94839919dbca1cd253a2a9b6cf0-merged.mount: Deactivated successfully.
Oct  9 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.70754713 +0000 UTC m=+0.141959564 container remove 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct  9 09:37:16 compute-2 systemd[1]: libpod-conmon-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope: Deactivated successfully.
Oct  9 09:37:16 compute-2 systemd[1]: Reloading.
Oct  9 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:16 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:16 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:16 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:17 compute-2 systemd[1]: Reloading.
Oct  9 09:37:17 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:37:17 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:37:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:17 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct  9 09:37:17 compute-2 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-2.dgxvnq for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.392930524 +0000 UTC m=+0.030407620 container create 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:17 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4002a90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4679a2aa40a070805f9d9565e83053f2661c47dd86137a73c7583a1cb944b8e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.438737899 +0000 UTC m=+0.076214995 container init 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct  9 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.4426722 +0000 UTC m=+0.080149286 container start 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct  9 09:37:17 compute-2 bash[17679]: 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f
Oct  9 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.381532166 +0000 UTC m=+0.019009282 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct  9 09:37:17 compute-2 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-2.dgxvnq for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Configuration file /etc/keepalived/keepalived.conf
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Starting VRRP child process, pid=4
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Startup complete
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: (VI_0) Entering BACKUP STATE (init)
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: VRRP_Script(check_backend) succeeded
Oct  9 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:17.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:18 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4002a90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:18 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct  9 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:18 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:19.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:19 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:20 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:20 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:21 2025: (VI_0) Entering MASTER STATE
Oct  9 09:37:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:21 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:22 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct  9 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct  9 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct  9 09:37:22 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:22 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:23.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:23 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:24 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:24 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:24 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 90, forcing new election
Oct  9 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:25.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:25 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be00054f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:25 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:25 compute-2 systemd-logind[800]: New session 21 of user zuul.
Oct  9 09:37:25 compute-2 systemd[1]: Started Session 21 of User zuul.
Oct  9 09:37:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:25.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) Entering BACKUP STATE
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:26 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:26 compute-2 podman[17895]: 2025-10-09 09:37:26.331555395 +0000 UTC m=+0.041545367 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:37:26 compute-2 podman[17895]: 2025-10-09 09:37:26.451103358 +0000 UTC m=+0.161093310 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Entering MASTER STATE
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Entering BACKUP STATE
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:26 compute-2 podman[18074]: 2025-10-09 09:37:26.738997178 +0000 UTC m=+0.042117856 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-2 podman[18074]: 2025-10-09 09:37:26.755282144 +0000 UTC m=+0.058402820 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:26 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:26 compute-2 python3.9[18024]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:27 compute-2 podman[18195]: 2025-10-09 09:37:27.137417839 +0000 UTC m=+0.051183656 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:37:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:27.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:37:27 compute-2 podman[18195]: 2025-10-09 09:37:27.143478719 +0000 UTC m=+0.057244546 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:27 compute-2 podman[18260]: 2025-10-09 09:37:27.386055721 +0000 UTC m=+0.047787189 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph)
Oct  9 09:37:27 compute-2 podman[18260]: 2025-10-09 09:37:27.424184524 +0000 UTC m=+0.085915991 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.buildah.version=1.28.2, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public)
Oct  9 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:27 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:27 compute-2 podman[18334]: 2025-10-09 09:37:27.594144035 +0000 UTC m=+0.057035883 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  9 09:37:27 compute-2 podman[18334]: 2025-10-09 09:37:27.605070212 +0000 UTC m=+0.067962050 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 09:37:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:28 compute-2 python3.9[18536]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:29 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:30 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:30 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:31.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:32 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:32 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-0 (monmap changed)...
Oct  9 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  9 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct  9 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  9 09:37:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:33 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring crash.compute-0 (monmap changed)...
Oct  9 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring daemon crash.compute-0 on compute-0
Oct  9 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring osd.1 (monmap changed)...
Oct  9 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring daemon osd.1 on compute-0
Oct  9 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:37:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:34 compute-2 systemd[1]: session-21.scope: Deactivated successfully.
Oct  9 09:37:34 compute-2 systemd[1]: session-21.scope: Consumed 6.914s CPU time.
Oct  9 09:37:34 compute-2 systemd-logind[800]: Session 21 logged out. Waiting for processes to exit.
Oct  9 09:37:34 compute-2 systemd-logind[800]: Removed session 21.
Oct  9 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct  9 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct  9 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct  9 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct  9 09:37:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:35.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:35 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:37:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:35.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:37:36 compute-2 ceph-mon[5983]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct  9 09:37:36 compute-2 ceph-mon[5983]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct  9 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:36 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:36 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring crash.compute-1 (monmap changed)...
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  9 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring daemon crash.compute-1 on compute-1
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring osd.0 (monmap changed)...
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  9 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring daemon osd.0 on compute-1
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:37 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.864053442 +0000 UTC m=+0.032967779 container create 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:37:37 compute-2 systemd[1]: Started libpod-conmon-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope.
Oct  9 09:37:37 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.920158284 +0000 UTC m=+0.089072631 container init 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.926071942 +0000 UTC m=+0.094986280 container start 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.927387764 +0000 UTC m=+0.096302101 container attach 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:37 compute-2 charming_leavitt[18734]: 167 167
Oct  9 09:37:37 compute-2 systemd[1]: libpod-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope: Deactivated successfully.
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.931540738 +0000 UTC m=+0.100455075 container died 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct  9 09:37:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-aa25ae9fce5831dab22c2eded0dabdd34c56215cad572c4c4d539f266a5536ea-merged.mount: Deactivated successfully.
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.850498305 +0000 UTC m=+0.019412641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.954522556 +0000 UTC m=+0.123436893 container remove 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:37:37 compute-2 systemd[1]: libpod-conmon-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope: Deactivated successfully.
Oct  9 09:37:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:37.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:38 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.380076043 +0000 UTC m=+0.033320147 container create e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct  9 09:37:38 compute-2 systemd[1]: Started libpod-conmon-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope.
Oct  9 09:37:38 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.435670557 +0000 UTC m=+0.088914682 container init e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.440721051 +0000 UTC m=+0.093965156 container start e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.441888932 +0000 UTC m=+0.095133037 container attach e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:38 compute-2 charming_mcnulty[18826]: 167 167
Oct  9 09:37:38 compute-2 systemd[1]: libpod-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope: Deactivated successfully.
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.444653797 +0000 UTC m=+0.097897902 container died e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 09:37:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-839e22ce41db17ec9436139cd35ba74a9a9ed8d4db44de8a2ce947b27f2ccd8d-merged.mount: Deactivated successfully.
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.366226547 +0000 UTC m=+0.019470672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.466683673 +0000 UTC m=+0.119927779 container remove e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:37:38 compute-2 systemd[1]: libpod-conmon-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope: Deactivated successfully.
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-1 (monmap changed)...
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-1 on compute-1
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-2 (monmap changed)...
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-2 on compute-2
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct  9 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  9 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct  9 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:38 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-2 podman[18947]: 2025-10-09 09:37:39.127521398 +0000 UTC m=+0.041508391 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:37:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:39 compute-2 podman[18947]: 2025-10-09 09:37:39.222142784 +0000 UTC m=+0.136129758 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  9 09:37:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:39 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct  9 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:39 compute-2 podman[19028]: 2025-10-09 09:37:39.521919266 +0000 UTC m=+0.043390997 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-2 podman[19028]: 2025-10-09 09:37:39.554232876 +0000 UTC m=+0.075704587 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:39 compute-2 podman[19127]: 2025-10-09 09:37:39.929325708 +0000 UTC m=+0.042786913 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:39 compute-2 podman[19127]: 2025-10-09 09:37:39.943015382 +0000 UTC m=+0.056476577 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:37:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:40 compute-2 podman[19178]: 2025-10-09 09:37:40.10992292 +0000 UTC m=+0.044212562 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Oct  9 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093740 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:37:40 compute-2 podman[19178]: 2025-10-09 09:37:40.118720559 +0000 UTC m=+0.053010181 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Oct  9 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:40 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:40 compute-2 podman[19220]: 2025-10-09 09:37:40.26313746 +0000 UTC m=+0.047549472 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  9 09:37:40 compute-2 podman[19220]: 2025-10-09 09:37:40.272100923 +0000 UTC m=+0.056512935 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:40 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:37:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:41 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:42 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:42 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:43 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.817194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663817250, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1463, "num_deletes": 251, "total_data_size": 4339746, "memory_usage": 4409408, "flush_reason": "Manual Compaction"}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663823667, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2428875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5751, "largest_seqno": 7209, "table_properties": {"data_size": 2422768, "index_size": 3178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14731, "raw_average_key_size": 20, "raw_value_size": 2409516, "raw_average_value_size": 3318, "num_data_blocks": 147, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002626, "oldest_key_time": 1760002626, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6499 microseconds, and 4697 cpu microseconds.
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823702) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2428875 bytes OK
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823715) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824065) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824077) EVENT_LOG_v1 {"time_micros": 1760002663824073, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4332458, prev total WAL file size 4332458, number of live WAL files 2.
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2371KB)], [15(11MB)]
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663824932, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14754868, "oldest_snapshot_seqno": -1}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2698 keys, 13382042 bytes, temperature: kUnknown
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663857427, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13382042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13359867, "index_size": 14322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 68558, "raw_average_key_size": 25, "raw_value_size": 13305771, "raw_average_value_size": 4931, "num_data_blocks": 634, "num_entries": 2698, "num_filter_entries": 2698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857604) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13382042 bytes
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 453.3 rd, 411.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.5) OK, records in: 3224, records dropped: 526 output_compression: NoCompression
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858003) EVENT_LOG_v1 {"time_micros": 1760002663857995, "job": 6, "event": "compaction_finished", "compaction_time_micros": 32552, "compaction_time_cpu_micros": 19888, "output_level": 6, "num_output_files": 1, "total_output_size": 13382042, "num_input_records": 3224, "num_output_records": 2698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663858460, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663859897, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:37:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:43.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:44 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:44 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf8001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:45 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:45.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:46 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:46 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000037s ======
Oct  9 09:37:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000037s
Oct  9 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:47 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf8001bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:47.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:48 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:48 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy ignored for local
Oct  9 09:37:48 compute-2 kernel: ganesha.nfsd[17190]: segfault at 50 ip 00007f0c9e66a32e sp 00007f0c56ffc210 error 4 in libntirpc.so.5.8[7f0c9e64f000+2c000] likely on CPU 2 (core 0, socket 2)
Oct  9 09:37:48 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:37:48 compute-2 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct  9 09:37:48 compute-2 systemd[1]: Started Process Core Dump (PID 19337/UID 0).
Oct  9 09:37:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:49 compute-2 systemd-coredump[19338]: Process 16674 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f0c9e66a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:37:49 compute-2 systemd[1]: systemd-coredump@0-19337-0.service: Deactivated successfully.
Oct  9 09:37:49 compute-2 podman[19347]: 2025-10-09 09:37:49.913263901 +0000 UTC m=+0.022600869 container died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct  9 09:37:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7-merged.mount: Deactivated successfully.
Oct  9 09:37:49 compute-2 podman[19347]: 2025-10-09 09:37:49.932730062 +0000 UTC m=+0.042067040 container remove 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  9 09:37:49 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:37:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:37:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:37:50 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct  9 09:37:50 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.144s CPU time.
Oct  9 09:37:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct  9 09:37:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093750 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:50 compute-2 systemd-logind[800]: New session 22 of user zuul.
Oct  9 09:37:50 compute-2 systemd[1]: Started Session 22 of User zuul.
Oct  9 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct  9 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:51 compute-2 python3.9[19534]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  9 09:37:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=12.595129967s) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 117.914047241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=12.595129967s) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown pruub 117.914047241s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:52.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:52 compute-2 python3.9[19709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  9 09:37:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  9 09:37:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct  9 09:37:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct  9 09:37:53 compute-2 python3.9[19866]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:37:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:54.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct  9 09:37:54 compute-2 python3.9[20020]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:37:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  9 09:37:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.346458435s) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active pruub 117.914161682s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.346458435s) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown pruub 117.914161682s@ mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:37:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093754 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:37:54 compute-2 python3.9[20175]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:37:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=47/49 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  9 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  9 09:37:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  9 09:37:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:55 compute-2 python3.9[20325]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:37:55 compute-2 network[20343]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:37:55 compute-2 network[20344]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:37:55 compute-2 network[20345]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:37:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct  9 09:37:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:56.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct  9 09:37:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct  9 09:37:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Oct  9 09:37:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Oct  9 09:37:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Oct  9 09:37:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Oct  9 09:37:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct  9 09:37:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct  9 09:37:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct  9 09:37:58 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:58 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  9 09:37:58 compute-2 python3.9[20610]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:37:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:58 compute-2 python3.9[20761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:37:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct  9 09:37:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  9 09:37:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  9 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  9 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:37:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:37:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:37:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:37:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:37:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:37:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:00.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:00 compute-2 python3.9[20916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:38:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct  9 09:38:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Oct  9 09:38:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:00 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 1.
Oct  9 09:38:00 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:38:00 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.144s CPU time.
Oct  9 09:38:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Oct  9 09:38:00 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.318891622 +0000 UTC m=+0.026861276 container create 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct  9 09:38:00 compute-2 systemd[1270]: Created slice User Background Tasks Slice.
Oct  9 09:38:00 compute-2 systemd[9018]: Starting Mark boot as successful...
Oct  9 09:38:00 compute-2 systemd[1270]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 09:38:00 compute-2 systemd[9018]: Finished Mark boot as successful.
Oct  9 09:38:00 compute-2 systemd[1270]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.359647497 +0000 UTC m=+0.067617152 container init 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.364087837 +0000 UTC m=+0.072057490 container start 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:38:00 compute-2 bash[20987]: 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac
Oct  9 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.307808074 +0000 UTC m=+0.015777748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:38:00 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:00 compute-2 python3.9[21170]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:38:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  9 09:38:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  9 09:38:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct  9 09:38:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:38:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:38:01 compute-2 python3.9[21254]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:38:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:38:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:38:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct  9 09:38:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct  9 09:38:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  9 09:38:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct  9 09:38:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct  9 09:38:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct  9 09:38:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  9 09:38:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  9 09:38:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  9 09:38:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  9 09:38:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  9 09:38:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  9 09:38:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  9 09:38:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:38:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.1f( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.18( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1a( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.14( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.3( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.9( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.4( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.2( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1d( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.7( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.867207527s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440948486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.13( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.867181778s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440948486s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.857445717s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431503296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866744995s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440979004s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.857426643s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431503296s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866710663s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440979004s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856984138s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866912842s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441360474s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866850853s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441360474s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856917381s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856478691s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866047859s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441009521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866032600s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441009521s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865854263s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440994263s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865835190s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440994263s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856459618s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855997086s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855927467s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431396484s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855923653s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855903625s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431396484s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855646133s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431381226s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865549088s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441314697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855628014s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431381226s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865533829s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441314697s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855484009s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855466843s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865333557s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441345215s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865318298s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441345215s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855165482s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431381226s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855087280s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431381226s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855023384s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431350708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855003357s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431350708s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864980698s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441299438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854892731s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864933968s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441482544s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864919662s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441482544s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864726067s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441452026s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864631653s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441299438s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864713669s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441452026s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852690697s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429489136s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852668762s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429489136s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854758263s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854465485s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431365967s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854315758s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431365967s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864445686s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441574097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852210045s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429473877s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864321709s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441574097s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852190018s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429473877s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864096642s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441574097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864078522s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441574097s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854456902s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.853819847s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851585388s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863708496s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441543579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851563454s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863759995s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851435661s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429473877s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851415634s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429473877s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863520622s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863506317s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441650391s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851209641s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851190567s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863346100s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863320351s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441741943s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863275528s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441772461s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863257408s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441772461s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852544785s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431350708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852529526s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431350708s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862961769s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441543579s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863590240s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441650391s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850242615s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850221634s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850105286s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850074768s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862318993s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862718582s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441726685s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862301826s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441741943s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862284660s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441726685s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862169266s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441818237s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862150192s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441818237s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849740982s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849614143s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861989021s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441894531s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861925125s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441894531s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849368095s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861711502s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441833496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861655235s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441833496s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849140167s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861316681s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441940308s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861275673s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441940308s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.16( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.848836899s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429489136s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.848590851s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429489136s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1d( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.11( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.13( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1e( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.17( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.1d( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.15( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.14( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.2( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.3( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.19( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.16( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.8( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1f( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.1( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.6( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.9( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.a( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.3( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.5( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:08.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Oct  9 09:38:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Oct  9 09:38:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.17( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.1d( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.16( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.19( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.15( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.13( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1c( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1f( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.1c( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.1d( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.14( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.8( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.5( v 33'9 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.1f( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1d( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.8( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.b( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.14( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.11( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.5( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.13( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.f( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.3( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.9( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.d( v 50'68 lc 43'18 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.9( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.19( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.16( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.e( v 56'99 lc 40'80 (0'0,56'99] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.a( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.6( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.1f( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.16( v 50'68 lc 43'35 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.a( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.b( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.8( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.17( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.9( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.3( v 56'99 lc 40'84 (0'0,56'99] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.6( v 56'69 lc 43'49 (0'0,56'69] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=56'69 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.7( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.15( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=57/58 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.18( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.2( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.c( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.2( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.3( v 33'9 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.9( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.16( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.3( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct  9 09:38:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct  9 09:38:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct  9 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  9 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  9 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.f( v 41'42 lc 35'25 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+recovering rops=1 m=1 mbc={255={(0+1)=1}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:10.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=56'1062 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=56'1062 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.162775993s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.583831787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.162708282s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.583831787s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.162840843s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.584121704s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.162815094s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.584121704s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.163148880s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.584884644s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.163119316s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.584884644s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.166052818s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.587966919s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.165368080s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.587966919s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=59'1068 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.9 scrub starts
Oct  9 09:38:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.9 scrub ok
Oct  9 09:38:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct  9 09:38:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct  9 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  9 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  9 09:38:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:38:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:12.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:38:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct  9 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093812 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct  9 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct  9 09:38:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Oct  9 09:38:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Oct  9 09:38:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  9 09:38:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  9 09:38:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  9 09:38:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  9 09:38:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct  9 09:38:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:15.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct  9 09:38:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.1: -2
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct  9 09:38:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct  9 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50cf00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88001ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct  9 09:38:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct  9 09:38:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct  9 09:38:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:17.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct  9 09:38:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  9 09:38:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct  9 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:17 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  9 09:38:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:18 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  9 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50cf00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct  9 09:38:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct  9 09:38:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:19.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct  9 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  9 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  9 09:38:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:19 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093819 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:20.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.918619156s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=63'1069 lcod 63'1070 mlcod 63'1070 active pruub 148.426910400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.918568611s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 unknown NOTIFY pruub 148.426910400s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.074834824s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 41'42 active pruub 145.583831787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.074742317s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 145.583831787s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=15.917100906s) [1] r=-1 lpr=66 pi=[61,66)/1 crt=40'1059 mlcod 0'0 active pruub 149.426162720s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.917779922s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 active pruub 148.427368164s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.917766571s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.427368164s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=15.916796684s) [1] r=-1 lpr=66 pi=[61,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 149.426162720s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.912860870s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 active pruub 148.423599243s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.912676811s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.423599243s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.076931000s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 41'42 active pruub 145.588119507s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.076904297s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 145.588119507s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  9 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c008f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct  9 09:38:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=56'1062 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=56'1062 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c008f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=67/68 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=66'1068 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=67/68 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=63'1071 lcod 63'1070 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  9 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:21 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:22.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.004076004s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=63'1071 lcod 68'1073 mlcod 68'1073 active pruub 150.429351807s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.003991127s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=63'1071 lcod 68'1073 mlcod 0'0 unknown NOTIFY pruub 150.429351807s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.003008842s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429412842s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002962112s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429412842s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.002772331s) [1] async=[1] r=-1 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429428101s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.002682686s) [1] r=-1 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002573013s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429428101s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002515793s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Oct  9 09:38:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Oct  9 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct  9 09:38:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:38:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:38:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct  9 09:38:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct  9 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:23 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:24.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct  9 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:25 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  9 09:38:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  9 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.727802277s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.426910400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.727765083s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.426910400s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72 pruub=9.726618767s) [0] r=-1 lpr=72 pi=[61,72)/1 crt=40'1059 mlcod 0'0 active pruub 149.426177979s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72 pruub=9.726602554s) [0] r=-1 lpr=72 pi=[61,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 149.426177979s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.724181175s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.424346924s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.724164963s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.726887703s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.427352905s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.726868629s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.427352905s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  9 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Oct  9 09:38:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Oct  9 09:38:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  9 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:27 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  9 09:38:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  9 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct  9 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  9 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  9 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:38:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  9 09:38:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  9 09:38:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001338005s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.714279175s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000436783s) [0] async=[0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713455200s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000190735s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713485718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995488167s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.709838867s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:29 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:38:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:30.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:38:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  9 09:38:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  9 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:30 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct  9 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  9 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  9 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311874390s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 153.583740234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.154195786s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.426406860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.152080536s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.424911499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:31 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct  9 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  9 09:38:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  9 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:38:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:38:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:38:32 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct  9 09:38:32 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:32 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  9 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct  9 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001540184s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432449341s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001205444s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432983398s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  9 09:38:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  9 09:38:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  9 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:38:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:38:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct  9 09:38:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  9 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  9 09:38:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  9 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:38:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct  9 09:38:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830674171s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 active pruub 164.427581787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827572823s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 active pruub 157.424896240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  9 09:38:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  9 09:38:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  9 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:35 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:38:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:36.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:38:36 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct  9 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  9 09:38:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  9 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:37 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct  9 09:38:37 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:37 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  9 09:38:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  9 09:38:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:37.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:37 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:38 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct  9 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988700867s) [0] async=[0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.435379028s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.988006592s) [0] async=[0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.434722900s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:38 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Oct  9 09:38:38 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Oct  9 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct  9 09:38:39 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  9 09:38:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:38:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:38:39 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  9 09:38:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:39 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:40 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  9 09:38:40 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  9 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:38:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:38:41 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  9 09:38:41 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  9 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:41 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093841 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:42 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  9 09:38:42 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  9 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:42 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct  9 09:38:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  9 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  9 09:38:43 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  9 09:38:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  9 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:43 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:44 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  9 09:38:44 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  9 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct  9 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621111870s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 active pruub 167.427932739s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616195679s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 active pruub 166.424423218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  9 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:45 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct  9 09:38:45 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct  9 09:38:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct  9 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:45 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  9 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:45 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:46.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:46 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  9 09:38:46 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  9 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct  9 09:38:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:46 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:47 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub starts
Oct  9 09:38:47 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub ok
Oct  9 09:38:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct  9 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987930298s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.825469971s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986706734s) [0] async=[0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.824310303s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:47 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:48 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  9 09:38:48 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  9 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff94001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct  9 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:49 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Oct  9 09:38:49 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Oct  9 09:38:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:49 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:38:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:38:50 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub starts
Oct  9 09:38:50 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub ok
Oct  9 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct  9 09:38:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct  9 09:38:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:51.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:51 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:52.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  9 09:38:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  9 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct  9 09:38:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  9 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  9 09:38:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  9 09:38:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:53.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:53 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  9 09:38:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct  9 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:54.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  9 09:38:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  9 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  9 09:38:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct  9 09:38:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Oct  9 09:38:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Oct  9 09:38:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:55 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  9 09:38:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct  9 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000026s ======
Oct  9 09:38:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000026s
Oct  9 09:38:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  9 09:38:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  9 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct  9 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  9 09:38:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  9 09:38:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:38:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:57 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct  9 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:38:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093858 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:38:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Oct  9 09:38:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Oct  9 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct  9 09:38:58 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:58 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  9 09:38:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  9 09:38:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:38:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:38:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:59.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:38:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:59 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct  9 09:39:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct  9 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:00 compute-2 python3.9[21815]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  9 09:39:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  9 09:39:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:01 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  9 09:39:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  9 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct  9 09:39:02 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  9 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:02 compute-2 python3.9[22104]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  9 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:39:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:39:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Oct  9 09:39:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Oct  9 09:39:03 compute-2 python3.9[22256]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  9 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:03 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003af0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  9 09:39:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct  9 09:39:03 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:03 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:04 compute-2 python3.9[22409]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:04.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  9 09:39:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  9 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:04 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  9 09:39:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct  9 09:39:04 compute-2 python3.9[22562]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  9 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:04 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:05.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  9 09:39:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  9 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:05 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct  9 09:39:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  9 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:05 compute-2 python3.9[22715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:06.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Oct  9 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Oct  9 09:39:06 compute-2 python3.9[22868]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:06 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct  9 09:39:06 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  9 09:39:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  9 09:39:06 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:06 compute-2 python3.9[22946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct  9 09:39:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct  9 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:07 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct  9 09:39:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:08 compute-2 python3.9[23099]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  9 09:39:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:39:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:39:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  9 09:39:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  9 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:08 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct  9 09:39:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  9 09:39:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  9 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:08 compute-2 python3.9[23278]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  9 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:08 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949655533s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 active pruub 197.427154541s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:39:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct  9 09:39:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct  9 09:39:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff980045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:09 compute-2 python3.9[23431]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:39:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:10 compute-2 python3.9[23584]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  9 09:39:10 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  9 09:39:10 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  9 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:10 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct  9 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393718719s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 active pruub 199.428039551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  9 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:10 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:11 compute-2 python3.9[23737]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct  9 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604859352s) [1] async=[1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 40'1059 active pruub 200.039978027s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct  9 09:39:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct  9 09:39:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:11.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:11 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  9 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct  9 09:39:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct  9 09:39:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct  9 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:12 compute-2 python3.9[23892]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:13 compute-2 python3.9[24044]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct  9 09:39:13 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995595932s) [1] async=[1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 40'1059 active pruub 201.442153931s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:13 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  9 09:39:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  9 09:39:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:13 compute-2 python3.9[24122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:13 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:14 compute-2 python3.9[24275]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:39:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct  9 09:39:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub starts
Oct  9 09:39:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub ok
Oct  9 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:14 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:14 compute-2 python3.9[24354]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:39:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:14 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:15 compute-2 python3.9[24506]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Oct  9 09:39:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Oct  9 09:39:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:16.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  9 09:39:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  9 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:16 compute-2 python3.9[24659]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct  9 09:39:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  9 09:39:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  9 09:39:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  9 09:39:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:17 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:17 compute-2 python3.9[24811]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  9 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:18 compute-2 python3.9[24962]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:18.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  9 09:39:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Oct  9 09:39:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Oct  9 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0042b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  9 09:39:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct  9 09:39:19 compute-2 python3.9[25115]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:19 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  9 09:39:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  9 09:39:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  9 09:39:19 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Oct  9 09:39:19 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  9 09:39:19 compute-2 systemd[1]: tuned.service: Consumed 271ms CPU time, 19.1M memory peak, read 4.0M from disk, written 16.0K to disk.
Oct  9 09:39:19 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  9 09:39:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  9 09:39:19 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  9 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:19 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:20 compute-2 python3.9[25278]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  9 09:39:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:20.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  9 09:39:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  9 09:39:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  9 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct  9 09:39:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  9 09:39:21 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Oct  9 09:39:21 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Oct  9 09:39:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:21 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:22.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  9 09:39:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  9 09:39:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  9 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:22 compute-2 python3.9[25433]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  9 09:39:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct  9 09:39:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  9 09:39:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  9 09:39:23 compute-2 python3.9[25587]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:23 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:23 compute-2 systemd[1]: session-22.scope: Deactivated successfully.
Oct  9 09:39:23 compute-2 systemd[1]: session-22.scope: Consumed 49.584s CPU time.
Oct  9 09:39:23 compute-2 systemd-logind[800]: Session 22 logged out. Waiting for processes to exit.
Oct  9 09:39:23 compute-2 systemd-logind[800]: Removed session 22.
Oct  9 09:39:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  9 09:39:24 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct  9 09:39:24 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct  9 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:25 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  9 09:39:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct  9 09:39:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:25 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  9 09:39:25 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  9 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:25 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct  9 09:39:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:26.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  9 09:39:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  9 09:39:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  9 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004350 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct  9 09:39:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct  9 09:39:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct  9 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:27 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct  9 09:39:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  9 09:39:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  9 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:28 compute-2 systemd-logind[800]: New session 23 of user zuul.
Oct  9 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:28 compute-2 systemd[1]: Started Session 23 of User zuul.
Oct  9 09:39:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct  9 09:39:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct  9 09:39:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct  9 09:39:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:29 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0043e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:29 compute-2 python3.9[25799]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:39:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct  9 09:39:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct  9 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:30 compute-2 python3.9[25957]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  9 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  9 09:39:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  9 09:39:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:39:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:39:31 compute-2 python3.9[26110]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:31 compute-2 python3.9[26195]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 09:39:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:39:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:39:32 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  9 09:39:32 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  9 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004400 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  9 09:39:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  9 09:39:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  9 09:39:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct  9 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:39:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:33 compute-2 python3.9[26349]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct  9 09:39:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct  9 09:39:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  9 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  9 09:39:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  9 09:39:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  9 09:39:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct  9 09:39:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:35.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:35 compute-2 python3.9[26504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:35 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:36 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct  9 09:39:36 compute-2 python3.9[26658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  9 09:39:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  9 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:39:36 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  9 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:36 compute-2 python3.9[26811]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  9 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:37 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct  9 09:39:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  9 09:39:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  9 09:39:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  9 09:39:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  9 09:39:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:37 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:37 compute-2 python3.9[26961]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:38 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct  9 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:38 compute-2 python3.9[27121]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct  9 09:39:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  9 09:39:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:39 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:39 compute-2 python3.9[27275]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:40 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  9 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:41 compute-2 python3.9[27563]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:39:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  9 09:39:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct  9 09:39:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:41 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:41 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:41 compute-2 python3.9[27714]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093942 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:39:42 compute-2 python3.9[27868]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  9 09:39:42 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct  9 09:39:42 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:42 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  9 09:39:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:43 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct  9 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:43 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:43 compute-2 python3.9[28023]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:39:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  9 09:39:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.335654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784335681, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3413, "num_deletes": 251, "total_data_size": 7306794, "memory_usage": 7417192, "flush_reason": "Manual Compaction"}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784347904, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4787675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7214, "largest_seqno": 10622, "table_properties": {"data_size": 4771843, "index_size": 10214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4549, "raw_key_size": 42141, "raw_average_key_size": 23, "raw_value_size": 4736781, "raw_average_value_size": 2625, "num_data_blocks": 444, "num_entries": 1804, "num_filter_entries": 1804, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002664, "oldest_key_time": 1760002664, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12331 microseconds, and 8654 cpu microseconds.
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.347985) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4787675 bytes OK
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348023) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348355) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348366) EVENT_LOG_v1 {"time_micros": 1760002784348363, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348381) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7289936, prev total WAL file size 7289936, number of live WAL files 2.
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.349669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4675KB)], [18(12MB)]
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784349732, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18169717, "oldest_snapshot_seqno": -1}
Oct  9 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3978 keys, 14451597 bytes, temperature: kUnknown
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784386605, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14451597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14418880, "index_size": 21663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 101461, "raw_average_key_size": 25, "raw_value_size": 14339929, "raw_average_value_size": 3604, "num_data_blocks": 936, "num_entries": 3978, "num_filter_entries": 3978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387046) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14451597 bytes
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387437) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 488.5 rd, 388.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.8 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4502, records dropped: 524 output_compression: NoCompression
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387450) EVENT_LOG_v1 {"time_micros": 1760002784387444, "job": 8, "event": "compaction_finished", "compaction_time_micros": 37192, "compaction_time_cpu_micros": 25510, "output_level": 6, "num_output_files": 1, "total_output_size": 14451597, "num_input_records": 4502, "num_output_records": 3978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784388229, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784389864, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.349610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:39:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct  9 09:39:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:45 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 09:39:45 compute-2 python3.9[28177]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct  9 09:39:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 09:39:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 09:39:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:46 compute-2 python3.9[28332]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  9 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0044c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0044c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct  9 09:39:47 compute-2 systemd[1]: session-23.scope: Deactivated successfully.
Oct  9 09:39:47 compute-2 systemd[1]: session-23.scope: Consumed 13.314s CPU time.
Oct  9 09:39:47 compute-2 systemd-logind[800]: Session 23 logged out. Waiting for processes to exit.
Oct  9 09:39:47 compute-2 systemd-logind[800]: Removed session 23.
Oct  9 09:39:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:47 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:39:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:48.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:49 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03b420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:51 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:52 compute-2 systemd-logind[800]: New session 24 of user zuul.
Oct  9 09:39:52 compute-2 systemd[1]: Started Session 24 of User zuul.
Oct  9 09:39:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:39:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093952 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:52 compute-2 python3.9[28623]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:39:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:53 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:53 compute-2 python3.9[28777]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:54.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:54 compute-2 python3.9[28972]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:55 compute-2 systemd[1]: session-24.scope: Deactivated successfully.
Oct  9 09:39:55 compute-2 systemd[1]: session-24.scope: Consumed 1.782s CPU time.
Oct  9 09:39:55 compute-2 systemd-logind[800]: Session 24 logged out. Waiting for processes to exit.
Oct  9 09:39:55 compute-2 systemd-logind[800]: Removed session 24.
Oct  9 09:39:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:55 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:57 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:39:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:39:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:59.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:39:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:59 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:40:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:40:00 compute-2 systemd[1]: Starting system activity accounting tool...
Oct  9 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:40:00 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 09:40:00 compute-2 systemd[1]: Finished system activity accounting tool.
Oct  9 09:40:00 compute-2 systemd-logind[800]: New session 25 of user zuul.
Oct  9 09:40:00 compute-2 systemd[1]: Started Session 25 of User zuul.
Oct  9 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:00 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct  9 09:40:01 compute-2 python3.9[29184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:01 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:01 compute-2 python3.9[29339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:02.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:02 compute-2 python3.9[29496]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy ignored for local
Oct  9 09:40:02 compute-2 kernel: ganesha.nfsd[21367]: segfault at 50 ip 00007f0039af032e sp 00007efffdffa210 error 4 in libntirpc.so.5.8[7f0039ad5000+2c000] likely on CPU 0 (core 0, socket 0)
Oct  9 09:40:02 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:40:02 compute-2 systemd[1]: Started Process Core Dump (PID 29509/UID 0).
Oct  9 09:40:03 compute-2 python3.9[29582]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:03.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:04 compute-2 systemd-coredump[29523]: Process 21005 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 47:#012#0  0x00007f0039af032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:40:04 compute-2 systemd[1]: systemd-coredump@1-29509-0.service: Deactivated successfully.
Oct  9 09:40:04 compute-2 systemd[1]: systemd-coredump@1-29509-0.service: Consumed 1.133s CPU time.
Oct  9 09:40:04 compute-2 podman[29592]: 2025-10-09 09:40:04.135443856 +0000 UTC m=+0.024765554 container died 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:40:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c-merged.mount: Deactivated successfully.
Oct  9 09:40:04 compute-2 podman[29592]: 2025-10-09 09:40:04.154484892 +0000 UTC m=+0.043806568 container remove 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:40:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct  9 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.293s CPU time.
Oct  9 09:40:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:04 compute-2 python3.9[29776]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:05.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:05 compute-2 python3.9[29972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:06.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:06 compute-2 python3.9[30125]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:40:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:07 compute-2 python3.9[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094007 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:07 compute-2 python3.9[30365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:08 compute-2 python3.9[30518]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:08 compute-2 python3.9[30597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094008 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/094008 (4) : haproxy version is 2.3.17-d1c9119
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/094008 (4) : path to executable is /usr/local/sbin/haproxy
Oct  9 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [ALERT] 281/094008 (4) : backend 'backend' has no server available!
Oct  9 09:40:09 compute-2 python3.9[30774]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:09 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:40:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:09.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:09 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:40:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:09 compute-2 python3.9[30927]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:10 compute-2 python3.9[31080]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:10 compute-2 python3.9[31233]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:11 compute-2 python3.9[31385]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094012 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:13 compute-2 python3.9[31540]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:13.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:13 compute-2 python3.9[31695]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:40:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:14 compute-2 python3.9[31847]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:40:14 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 2.
Oct  9 09:40:14 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:40:14 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.293s CPU time.
Oct  9 09:40:14 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:40:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.572919541 +0000 UTC m=+0.031569438 container create c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.613105473 +0000 UTC m=+0.071755359 container init c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.62089627 +0000 UTC m=+0.079546157 container start c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 09:40:14 compute-2 bash[31934]: c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f
Oct  9 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.561214836 +0000 UTC m=+0.019864733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:40:14 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:15 compute-2 python3.9[32094]: ansible-service_facts Invoked
Oct  9 09:40:15 compute-2 network[32111]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:40:15 compute-2 network[32112]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:40:15 compute-2 network[32113]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:40:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:16.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000025s ======
Oct  9 09:40:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000025s
Oct  9 09:40:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:19 compute-2 python3.9[32573]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:40:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:21.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:21 compute-2 python3.9[32728]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  9 09:40:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:22.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:23 compute-2 python3.9[32881]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:23 compute-2 python3.9[32960]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:24 compute-2 python3.9[33112]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:24 compute-2 python3.9[33191]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:26 compute-2 python3.9[33344]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:26.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:26 compute-2 systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Oct  9 09:40:26 compute-2 systemd[1]: session-3.scope: Deactivated successfully.
Oct  9 09:40:26 compute-2 systemd[1]: session-3.scope: Consumed 6.265s CPU time.
Oct  9 09:40:26 compute-2 systemd-logind[800]: Removed session 3.
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:27.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:27 compute-2 python3.9[33511]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:27 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:28.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:28 compute-2 python3.9[33596]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094028 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:29 compute-2 systemd[1]: session-25.scope: Deactivated successfully.
Oct  9 09:40:29 compute-2 systemd[1]: session-25.scope: Consumed 17.742s CPU time.
Oct  9 09:40:29 compute-2 systemd-logind[800]: Session 25 logged out. Waiting for processes to exit.
Oct  9 09:40:29 compute-2 systemd-logind[800]: Removed session 25.
Oct  9 09:40:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:29.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct  9 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:29 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328002f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:31 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:32 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:32 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:33 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:40:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:40:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:34 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:34 compute-2 systemd-logind[800]: New session 26 of user zuul.
Oct  9 09:40:34 compute-2 systemd[1]: Started Session 26 of User zuul.
Oct  9 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:34 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:35 compute-2 python3.9[33810]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:35.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:35 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:35 compute-2 python3.9[33963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:36 compute-2 python3.9[34041]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:36 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:36 compute-2 systemd[1]: session-26.scope: Deactivated successfully.
Oct  9 09:40:36 compute-2 systemd[1]: session-26.scope: Consumed 1.153s CPU time.
Oct  9 09:40:36 compute-2 systemd-logind[800]: Session 26 logged out. Waiting for processes to exit.
Oct  9 09:40:36 compute-2 systemd-logind[800]: Removed session 26.
Oct  9 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:36 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:37 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:40:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:38.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:38 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:38 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:39 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:40 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:40 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:41 compute-2 systemd-logind[800]: New session 27 of user zuul.
Oct  9 09:40:41 compute-2 systemd[1]: Started Session 27 of User zuul.
Oct  9 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:41 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:42.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:42 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:42 compute-2 python3.9[34227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:42 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:43 compute-2 python3.9[34383]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:43.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:43 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:44 compute-2 python3.9[34559]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:44.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:44 compute-2 python3.9[34638]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.t975grzf recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:44 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:44 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334001ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:45 compute-2 python3.9[34790]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:45 compute-2 python3.9[34868]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ecbsxskw recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:45 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00a820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:46 compute-2 python3.9[35021]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:46 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00a820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:46 compute-2 python3.9[35174]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:46 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:47 compute-2 python3.9[35252]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:47.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:47 compute-2 python3.9[35404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:47 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33340029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:47 compute-2 python3.9[35483]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:40:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:40:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:40:48 compute-2 python3.9[35636]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:48 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:48 compute-2 python3.9[35813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:48 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:49 compute-2 python3.9[35891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:49 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:49 compute-2 python3.9[36044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:50 compute-2 python3.9[36122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:50 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33340029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:50 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:51 compute-2 python3.9[36275]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:51 compute-2 systemd[1]: Reloading.
Oct  9 09:40:51 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:40:51 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:40:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000999994s ======
Oct  9 09:40:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:51.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999994s
Oct  9 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:51 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:51 compute-2 python3.9[36465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:52 compute-2 python3.9[36543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:52 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:52 compute-2 python3.9[36696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:52 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:52 compute-2 python3.9[36774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:53.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:53 compute-2 python3.9[36926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:40:53 compute-2 systemd[1]: Reloading.
Oct  9 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:53 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:53 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:40:53 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:53 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:40:53 compute-2 systemd[9018]: Created slice User Background Tasks Slice.
Oct  9 09:40:53 compute-2 systemd[9018]: Starting Cleanup of User's Temporary Files and Directories...
Oct  9 09:40:53 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:40:53 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:40:53 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:40:53 compute-2 systemd[9018]: Finished Cleanup of User's Temporary Files and Directories.
Oct  9 09:40:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:54.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:54 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:54 compute-2 python3.9[37120]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:40:54 compute-2 network[37137]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:40:54 compute-2 network[37138]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:40:54 compute-2 network[37139]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:54 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:55 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:56 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:56 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:57.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:57 compute-2 python3.9[37485]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:57 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:57 compute-2 python3.9[37564]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:58 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:58 compute-2 python3.9[37717]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:58 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:59 compute-2 python3.9[37869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:40:59 compute-2 python3.9[37947]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:40:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:40:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:40:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:40:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:59 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:00 compute-2 python3.9[38101]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  9 09:41:00 compute-2 systemd[1]: Starting Time & Date Service...
Oct  9 09:41:00 compute-2 systemd[1]: Started Time & Date Service.
Oct  9 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:00 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:00 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:01 compute-2 python3.9[38257]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:01.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:01 compute-2 python3.9[38409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:01 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:01 compute-2 python3.9[38488]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:02 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:02 compute-2 python3.9[38666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:02 compute-2 python3.9[38744]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n3x6eb4v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:41:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:02 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:03 compute-2 python3.9[38896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:03.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:03 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:03 compute-2 python3.9[38975]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:04.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:04 compute-2 python3.9[39128]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:04 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:04 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:04 compute-2 python3[39281]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:41:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:05.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:05 compute-2 python3.9[39433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:05 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:05 compute-2 python3.9[39512]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:06.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:06 compute-2 python3.9[39665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:06 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:06 compute-2 python3.9[39743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:06 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334003b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:07 compute-2 python3.9[39895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:07.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:07 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:07 compute-2 python3.9[39973]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:08 compute-2 python3.9[40126]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:08.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:08 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:08 compute-2 python3.9[40205]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:08 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:09 compute-2 python3.9[40382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:09 compute-2 python3.9[40460]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:09 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:10 compute-2 python3.9[40613]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:10 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:10 compute-2 python3.9[40769]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:10 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:11 compute-2 python3.9[40921]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:11.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:11 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:11 compute-2 python3.9[41074]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:12.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:12 compute-2 python3.9[41227]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:12 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:12 compute-2 python3.9[41379]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  9 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:12 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:13 compute-2 systemd[1]: session-27.scope: Deactivated successfully.
Oct  9 09:41:13 compute-2 systemd[1]: session-27.scope: Consumed 20.682s CPU time.
Oct  9 09:41:13 compute-2 systemd-logind[800]: Session 27 logged out. Waiting for processes to exit.
Oct  9 09:41:13 compute-2 systemd-logind[800]: Removed session 27.
Oct  9 09:41:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:13.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:13 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:15 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:16 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480027a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:16 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:17 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:17 compute-2 systemd-logind[800]: New session 28 of user zuul.
Oct  9 09:41:17 compute-2 systemd[1]: Started Session 28 of User zuul.
Oct  9 09:41:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:18 compute-2 python3.9[41566]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  9 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:18 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:18 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:19 compute-2 python3.9[41718]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:19.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:19 compute-2 python3.9[41872]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  9 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:19 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:20 compute-2 python3.9[42025]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.5bjza0nn follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:20.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:20 compute-2 python3.9[42151]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.5bjza0nn mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002879.7659876-104-31828867044063/.source.5bjza0nn _original_basename=.2dnpcni6 follow=False checksum=231ee42d81be70362d898b48675a8dc8dc6887b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:21.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:21 compute-2 python3.9[42303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:21 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:22 compute-2 python3.9[42456]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=#012 create=True mode=0644 path=/tmp/ansible.5bjza0nn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:22 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:22 compute-2 python3.9[42609]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.5bjza0nn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:22 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:23.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:23 compute-2 python3.9[42763]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.5bjza0nn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:23 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:23 compute-2 systemd[1]: session-28.scope: Deactivated successfully.
Oct  9 09:41:23 compute-2 systemd[1]: session-28.scope: Consumed 3.859s CPU time.
Oct  9 09:41:23 compute-2 systemd-logind[800]: Session 28 logged out. Waiting for processes to exit.
Oct  9 09:41:23 compute-2 systemd-logind[800]: Removed session 28.
Oct  9 09:41:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:24 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:24 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000999994s ======
Oct  9 09:41:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:25.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999994s
Oct  9 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:25 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f334c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:27.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:27 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:29 compute-2 systemd-logind[800]: New session 29 of user zuul.
Oct  9 09:41:29 compute-2 systemd[1]: Started Session 29 of User zuul.
Oct  9 09:41:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:29.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:29 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:29 compute-2 python3.9[42975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:30.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:30 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct  9 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:30 compute-2 python3.9[43134]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c001340 fd 38 proxy ignored for local
Oct  9 09:41:30 compute-2 kernel: ganesha.nfsd[34075]: segfault at 50 ip 00007f33e572732e sp 00007f3399ffa210 error 4 in libntirpc.so.5.8[7f33e570c000+2c000] likely on CPU 0 (core 0, socket 0)
Oct  9 09:41:30 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct  9 09:41:30 compute-2 systemd[1]: Started Process Core Dump (PID 43161/UID 0).
Oct  9 09:41:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:31.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:31 compute-2 python3.9[43290]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:41:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:32 compute-2 systemd-coredump[43162]: Process 31981 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007f33e572732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Oct  9 09:41:32 compute-2 systemd[1]: systemd-coredump@2-43161-0.service: Deactivated successfully.
Oct  9 09:41:32 compute-2 systemd[1]: systemd-coredump@2-43161-0.service: Consumed 1.020s CPU time.
Oct  9 09:41:32 compute-2 podman[43452]: 2025-10-09 09:41:32.146217206 +0000 UTC m=+0.030112942 container died c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:41:32 compute-2 python3.9[43444]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84-merged.mount: Deactivated successfully.
Oct  9 09:41:32 compute-2 podman[43452]: 2025-10-09 09:41:32.167553026 +0000 UTC m=+0.051448763 container remove c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  9 09:41:32 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct  9 09:41:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:32 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct  9 09:41:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:32 compute-2 python3.9[43639]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:33.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:33 compute-2 python3.9[43791]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:33 compute-2 systemd-logind[800]: Session 29 logged out. Waiting for processes to exit.
Oct  9 09:41:33 compute-2 systemd[1]: session-29.scope: Deactivated successfully.
Oct  9 09:41:33 compute-2 systemd[1]: session-29.scope: Consumed 3.010s CPU time.
Oct  9 09:41:33 compute-2 systemd-logind[800]: Removed session 29.
Oct  9 09:41:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:34.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:41:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:35.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:41:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:36.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:41:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:38 compute-2 systemd-logind[800]: New session 30 of user zuul.
Oct  9 09:41:38 compute-2 systemd[1]: Started Session 30 of User zuul.
Oct  9 09:41:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:39 compute-2 python3.9[43975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:40 compute-2 python3.9[44133]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:41:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:41 compute-2 python3.9[44217]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  9 09:41:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:41.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:42 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 3.
Oct  9 09:41:42 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:41:42 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct  9 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.576323776 +0000 UTC m=+0.032591630 container create 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct  9 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct  9 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct  9 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.610781588 +0000 UTC m=+0.067049441 container init 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.616751923 +0000 UTC m=+0.073019777 container start 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 09:41:42 compute-2 bash[44360]: 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99
Oct  9 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.564337038 +0000 UTC m=+0.020604912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct  9 09:41:42 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:42 compute-2 python3.9[44424]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:41:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:43.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:44 compute-2 python3.9[44615]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:41:45 compute-2 python3.9[44765]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:45.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:45 compute-2 python3.9[44916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:41:46 compute-2 systemd-logind[800]: Session 30 logged out. Waiting for processes to exit.
Oct  9 09:41:46 compute-2 systemd[1]: session-30.scope: Deactivated successfully.
Oct  9 09:41:46 compute-2 systemd[1]: session-30.scope: Consumed 4.473s CPU time.
Oct  9 09:41:46 compute-2 systemd-logind[800]: Removed session 30.
Oct  9 09:41:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:48.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:49.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:50.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:51 compute-2 systemd-logind[800]: New session 31 of user zuul.
Oct  9 09:41:51 compute-2 systemd[1]: Started Session 31 of User zuul.
Oct  9 09:41:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:51.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:51 compute-2 python3.9[45125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:41:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct  9 09:41:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:52.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct  9 09:41:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:41:53 compute-2 python3.9[45282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:53 compute-2 python3.9[45435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:54 compute-2 python3.9[45588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:54 compute-2 python3.9[45711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002913.9431949-158-64012592389817/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=d3993d88f699999b71af21c3d560a684811602ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:55 compute-2 python3.9[45863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:55.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:55 compute-2 python3.9[45987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.0606902-158-211409469481745/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=7fbde074fa214bc5bd2f230fec0e2b862212f741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:56.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:56 compute-2 python3.9[46140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:56 compute-2 python3.9[46263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.9632611-158-219496370782421/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=38a926159765120eafd851814c946f414ec424b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:57 compute-2 python3.9[46415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:57 compute-2 python3.9[46568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:41:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:41:58 compute-2 python3.9[46720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:41:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:58.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:41:58 compute-2 python3.9[46844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002917.8569152-345-64477424480336/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=2fab77a7a903d443f6dce5fe29730068e168b602 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:59 compute-2 python3.9[46996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:41:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:41:59 compute-2 python3.9[47119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002918.7416267-345-57949908334541/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:41:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:41:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:41:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:41:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:41:59 compute-2 python3.9[47272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:00 compute-2 python3.9[47396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002919.5785768-345-19621687433874/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=6410d67b3938ea761bab7ec9350e6d4e3cd79110 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:00 compute-2 python3.9[47548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:01 compute-2 python3.9[47700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:01 compute-2 python3.9[47853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:02 compute-2 python3.9[47976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.4579563-524-222612344865110/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=980dd90580b4bebfd9eff0e377343cee4f9b8b85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:02.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:02 compute-2 python3.9[48208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:03 compute-2 python3.9[48331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002922.3096123-524-78572849494310/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:42:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:03.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:03 compute-2 python3.9[48483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:03 compute-2 python3.9[48607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002923.1461365-524-118474943956971/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=52d07288a3525b8d0f28767e1a1ccba8d4ceb4ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:04.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:04 compute-2 python3.9[48760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:05 compute-2 python3.9[48912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:05 compute-2 python3.9[49036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002924.9218068-727-162718140527839/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:42:06 compute-2 python3.9[49213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:42:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:06.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:42:06 compute-2 python3.9[49366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:07 compute-2 python3.9[49489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002926.3450406-801-206082708221815/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:07 compute-2 python3.9[49642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:08 compute-2 python3.9[49794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:08 compute-2 python3.9[49918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002927.7935505-877-107663192395817/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:09 compute-2 python3.9[50095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:09 compute-2 python3.9[50248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:10 compute-2 python3.9[50371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002929.286838-952-21625768280456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:10 compute-2 python3.9[50524]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:11 compute-2 python3.9[50676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:11 compute-2 python3.9[50799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002930.7719016-1028-242452959543648/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:12 compute-2 python3.9[50952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:12.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:12 compute-2 python3.9[51105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:13 compute-2 python3.9[51228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002932.2727973-1102-231805370009622/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:13 compute-2 systemd[1]: session-31.scope: Deactivated successfully.
Oct  9 09:42:13 compute-2 systemd[1]: session-31.scope: Consumed 17.084s CPU time.
Oct  9 09:42:13 compute-2 systemd-logind[800]: Session 31 logged out. Waiting for processes to exit.
Oct  9 09:42:13 compute-2 systemd-logind[800]: Removed session 31.
Oct  9 09:42:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:18 compute-2 systemd-logind[800]: New session 32 of user zuul.
Oct  9 09:42:18 compute-2 systemd[1]: Started Session 32 of User zuul.
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:19 compute-2 python3.9[51414]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:19.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:19 compute-2 python3.9[51567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:20 compute-2 python3.9[51690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002939.2799282-64-136410422414060/.source.conf _original_basename=ceph.conf follow=False checksum=8b7272e0630e6cb598e773121c6b56dda1c87bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:20 compute-2 python3.9[51843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:21 compute-2 python3.9[51966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002940.4067373-64-194490064068930/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:21.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:21 compute-2 systemd[1]: session-32.scope: Deactivated successfully.
Oct  9 09:42:21 compute-2 systemd[1]: session-32.scope: Consumed 2.089s CPU time.
Oct  9 09:42:21 compute-2 systemd-logind[800]: Session 32 logged out. Waiting for processes to exit.
Oct  9 09:42:21 compute-2 systemd-logind[800]: Removed session 32.
Oct  9 09:42:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:23.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:26 compute-2 systemd[1]: Starting dnf makecache...
Oct  9 09:42:26 compute-2 systemd-logind[800]: New session 33 of user zuul.
Oct  9 09:42:26 compute-2 systemd[1]: Started Session 33 of User zuul.
Oct  9 09:42:26 compute-2 dnf[51999]: Metadata cache refreshed recently.
Oct  9 09:42:26 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  9 09:42:26 compute-2 systemd[1]: Finished dnf makecache.
Oct  9 09:42:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:27 compute-2 python3.9[52151]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094227 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [ALERT] 281/094227 (4) : backend 'backend' has no server available!
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:28 compute-2 python3.9[52308]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:28 compute-2 python3.9[52461]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:29 compute-2 python3.9[52636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:29 compute-2 python3.9[52789]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:42:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:31 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct  9 09:42:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:31 compute-2 python3.9[52949]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:42:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:32 compute-2 python3.9[53034]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:42:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:34 compute-2 python3.9[53189]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:42:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:34 compute-2 python3[53345]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  9 09:42:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:35 compute-2 python3.9[53497]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:35 compute-2 python3.9[53650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-2 python3.9[53728]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:36 compute-2 python3.9[53881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:37 compute-2 python3.9[53959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hbg46kms recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:37 compute-2 python3.9[54111]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:37 compute-2 python3.9[54190]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:38 compute-2 python3.9[54343]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:39 compute-2 python3[54496]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:42:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:39 compute-2 python3.9[54649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:40 compute-2 python3.9[54775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002959.3904185-433-33693085918159/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:40 compute-2 python3.9[54927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:41 compute-2 python3.9[55052]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002960.508045-478-137696161991167/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:41 compute-2 python3.9[55205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:42.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:42 compute-2 python3.9[55331]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002961.6267846-523-140967059245984/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:42 compute-2 python3.9[55483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:43 compute-2 python3.9[55608]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002962.576975-568-14409729099319/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:43.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:43 compute-2 python3.9[55761]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:44 compute-2 python3.9[55887]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002963.5476701-613-9988118755186/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:45 compute-2 python3.9[56039]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:45.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:45 compute-2 python3.9[56191]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:46 compute-2 python3.9[56348]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:46 compute-2 python3.9[56500]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:47.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:47 compute-2 python3.9[56653]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:48 compute-2 python3.9[56808]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:48 compute-2 python3.9[56964]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:49.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:49 compute-2 python3.9[57140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:42:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:50 compute-2 python3.9[57294]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:50 compute-2 ovs-vsctl[57295]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:51 compute-2 python3.9[57447]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:51 compute-2 python3.9[57603]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:42:51 compute-2 ovs-vsctl[57604]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  9 09:42:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:52.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:52 compute-2 python3.9[57755]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:42:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:52 compute-2 python3.9[57909]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:53 compute-2 python3.9[58061]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:53 compute-2 python3.9[58140]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:42:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:54.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:42:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:54 compute-2 python3.9[58293]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:54 compute-2 python3.9[58371]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:42:55 compute-2 python3.9[58523]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:55 compute-2 python3.9[58676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:42:56 compute-2 python3.9[58754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:56 compute-2 python3.9[58907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:57 compute-2 python3.9[58985]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:57 compute-2 python3.9[59137]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:42:57 compute-2 systemd[1]: Reloading.
Oct  9 09:42:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:57 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:42:57 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:42:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:58.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:58 compute-2 python3.9[59327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:58 compute-2 python3.9[59405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:59 compute-2 python3.9[59557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:42:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:42:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:42:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:42:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:59.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:42:59 compute-2 python3.9[59635]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:42:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:42:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:00 compute-2 python3.9[59788]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:00 compute-2 systemd[1]: Reloading.
Oct  9 09:43:00 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:00 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:00.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:00 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:43:00 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:00 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:00 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:43:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:01 compute-2 python3.9[59982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:01 compute-2 python3.9[60134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:01 compute-2 python3.9[60258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002981.2392247-1366-221275307511219/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:02.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:02 compute-2 python3.9[60411]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:03 compute-2 python3.9[60563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:03 compute-2 python3.9[60687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002983.0441544-1441-29288235349591/.source.json _original_basename=.91twzejg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:04 compute-2 python3.9[60839]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:04.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:05.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:05 compute-2 python3.9[61268]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:06 compute-2 python3.9[61500]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:43:07 compute-2 python3.9[61652]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:08.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:08 compute-2 python3[61825]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:43:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.123781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991123825, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2315, "num_deletes": 250, "total_data_size": 6190411, "memory_usage": 6285112, "flush_reason": "Manual Compaction"}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991130058, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2417462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10628, "largest_seqno": 12937, "table_properties": {"data_size": 2410776, "index_size": 3500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 2395934, "raw_average_value_size": 2852, "num_data_blocks": 156, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002784, "oldest_key_time": 1760002784, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6325 microseconds, and 4379 cpu microseconds.
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130110) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2417462 bytes OK
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130129) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130499) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130513) EVENT_LOG_v1 {"time_micros": 1760002991130510, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130527) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6180268, prev total WAL file size 6180268, number of live WAL files 2.
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131968) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2360KB)], [21(13MB)]
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991132158, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16869059, "oldest_snapshot_seqno": -1}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4397 keys, 14823754 bytes, temperature: kUnknown
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991172105, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14823754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14789596, "index_size": 22080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110454, "raw_average_key_size": 25, "raw_value_size": 14704682, "raw_average_value_size": 3344, "num_data_blocks": 954, "num_entries": 4397, "num_filter_entries": 4397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172449) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14823754 bytes
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 422.0 rd, 370.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.1) OK, records in: 4818, records dropped: 421 output_compression: NoCompression
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172959) EVENT_LOG_v1 {"time_micros": 1760002991172952, "job": 10, "event": "compaction_finished", "compaction_time_micros": 39975, "compaction_time_cpu_micros": 20579, "output_level": 6, "num_output_files": 1, "total_output_size": 14823754, "num_input_records": 4818, "num_output_records": 4397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991173449, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991175029, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:11.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:14 compute-2 podman[61836]: 2025-10-09 09:43:14.106464503 +0000 UTC m=+5.106289766 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-2 podman[61990]: 2025-10-09 09:43:14.197594018 +0000 UTC m=+0.028537519 container create 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct  9 09:43:14 compute-2 podman[61990]: 2025-10-09 09:43:14.184574214 +0000 UTC m=+0.015517725 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-2 python3[61825]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume 
/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  9 09:43:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:14.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.434529) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994434554, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 122966, "memory_usage": 129416, "flush_reason": "Manual Compaction"}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct  9 09:43:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994436695, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 80942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12938, "largest_seqno": 13227, "table_properties": {"data_size": 79032, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75309, "raw_average_value_size": 278, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002991, "oldest_key_time": 1760002991, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 2428 microseconds, and 584 cpu microseconds.
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436718) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 80942 bytes OK
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436974) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437384) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437396) EVENT_LOG_v1 {"time_micros": 1760002994437393, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120817, prev total WAL file size 120817, number of live WAL files 2.
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437892) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(14MB)]
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994437930, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14904696, "oldest_snapshot_seqno": -1}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4157 keys, 11557943 bytes, temperature: kUnknown
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994468811, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11557943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11527062, "index_size": 19379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106405, "raw_average_key_size": 25, "raw_value_size": 11448005, "raw_average_value_size": 2753, "num_data_blocks": 828, "num_entries": 4157, "num_filter_entries": 4157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468958) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11557943 bytes
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 481.9 rd, 373.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(326.9) write-amplify(142.8) OK, records in: 4667, records dropped: 510 output_compression: NoCompression
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469333) EVENT_LOG_v1 {"time_micros": 1760002994469326, "job": 12, "event": "compaction_finished", "compaction_time_micros": 30932, "compaction_time_cpu_micros": 17090, "output_level": 6, "num_output_files": 1, "total_output_size": 11557943, "num_input_records": 4667, "num_output_records": 4157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994469427, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994470949, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:43:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:14 compute-2 python3.9[62170]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:15 compute-2 python3.9[62324]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:15.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:15 compute-2 python3.9[62401]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:16 compute-2 python3.9[62553]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002995.8268647-1705-134547807455674/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:16 compute-2 python3.9[62629]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:43:16 compute-2 systemd[1]: Reloading.
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:16 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:16 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-2 python3.9[62739]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:17 compute-2 systemd[1]: Reloading.
Oct  9 09:43:17 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:17 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:17.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:17 compute-2 systemd[1]: Starting ovn_controller container...
Oct  9 09:43:17 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:43:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1132d56d96b82cea4f762b28bfa85a63cfdf43d1885ea300deef7166cf60aaf3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  9 09:43:17 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460.
Oct  9 09:43:17 compute-2 podman[62782]: 2025-10-09 09:43:17.721253125 +0000 UTC m=+0.078741962 container init 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:43:17 compute-2 ovn_controller[62794]: + sudo -E kolla_set_configs
Oct  9 09:43:17 compute-2 podman[62782]: 2025-10-09 09:43:17.739289809 +0000 UTC m=+0.096778626 container start 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  9 09:43:17 compute-2 edpm-start-podman-container[62782]: ovn_controller
Oct  9 09:43:17 compute-2 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:43:17 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:43:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:17 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:43:17 compute-2 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:43:17 compute-2 edpm-start-podman-container[62781]: Creating additional drop-in dependency for "ovn_controller" (2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460)
Oct  9 09:43:17 compute-2 podman[62801]: 2025-10-09 09:43:17.805352368 +0000 UTC m=+0.057909116 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:43:17 compute-2 systemd[1]: Reloading.
Oct  9 09:43:17 compute-2 systemd[62820]: Queued start job for default target Main User Target.
Oct  9 09:43:17 compute-2 systemd[62820]: Created slice User Application Slice.
Oct  9 09:43:17 compute-2 systemd[62820]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:43:17 compute-2 systemd[62820]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:17 compute-2 systemd[62820]: Reached target Paths.
Oct  9 09:43:17 compute-2 systemd[62820]: Reached target Timers.
Oct  9 09:43:17 compute-2 systemd[62820]: Starting D-Bus User Message Bus Socket...
Oct  9 09:43:17 compute-2 systemd[62820]: Starting Create User's Volatile Files and Directories...
Oct  9 09:43:17 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:17 compute-2 systemd[62820]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:43:17 compute-2 systemd[62820]: Reached target Sockets.
Oct  9 09:43:17 compute-2 systemd[62820]: Finished Create User's Volatile Files and Directories.
Oct  9 09:43:17 compute-2 systemd[62820]: Reached target Basic System.
Oct  9 09:43:17 compute-2 systemd[62820]: Reached target Main User Target.
Oct  9 09:43:17 compute-2 systemd[62820]: Startup finished in 102ms.
Oct  9 09:43:17 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:18 compute-2 systemd[1]: Started User Manager for UID 0.
Oct  9 09:43:18 compute-2 systemd[1]: Started ovn_controller container.
Oct  9 09:43:18 compute-2 systemd[1]: 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460-67d7d7bdc0cacc96.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:43:18 compute-2 systemd[1]: 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460-67d7d7bdc0cacc96.service: Failed with result 'exit-code'.
Oct  9 09:43:18 compute-2 systemd[1]: Started Session c1 of User root.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Validating config file
Oct  9 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Writing out command to execute
Oct  9 09:43:18 compute-2 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: ++ cat /run_command
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + ARGS=
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + sudo kolla_copy_cacerts
Oct  9 09:43:18 compute-2 systemd[1]: Started Session c2 of User root.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + [[ ! -n '' ]]
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + . kolla_extend_start
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + umask 0022
Oct  9 09:43:18 compute-2 ovn_controller[62794]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  9 09:43:18 compute-2 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1465] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1471] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1478] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1482] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1484] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  9 09:43:18 compute-2 kernel: br-int: entered promiscuous mode
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|pinctrl(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1619] manager: (ovn-fc69d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1623] manager: (ovn-ef2171-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1662] manager: (ovn-1479fb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct  9 09:43:18 compute-2 systemd-udevd[62921]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-2 kernel: genev_sys_6081: entered promiscuous mode
Oct  9 09:43:18 compute-2 systemd-udevd[62922]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1770] device (genev_sys_6081): carrier: link connected
Oct  9 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1773] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct  9 09:43:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:18 compute-2 python3.9[63053]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:18 compute-2 ovs-vsctl[63054]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  9 09:43:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:19 compute-2 python3.9[63206]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-2 ovs-vsctl[63208]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  9 09:43:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:19 compute-2 python3.9[63362]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:43:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:19 compute-2 ovs-vsctl[63363]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  9 09:43:20 compute-2 systemd[1]: session-33.scope: Deactivated successfully.
Oct  9 09:43:20 compute-2 systemd[1]: session-33.scope: Consumed 41.376s CPU time.
Oct  9 09:43:20 compute-2 systemd-logind[800]: Session 33 logged out. Waiting for processes to exit.
Oct  9 09:43:20 compute-2 systemd-logind[800]: Removed session 33.
Oct  9 09:43:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:23.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:24.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:24 compute-2 systemd-logind[800]: New session 35 of user zuul.
Oct  9 09:43:24 compute-2 systemd[1]: Started Session 35 of User zuul.
Oct  9 09:43:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:25 compute-2 python3.9[63547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:26.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:26 compute-2 python3.9[63704]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:27 compute-2 python3.9[63856]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:27 compute-2 python3.9[64008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:28 compute-2 python3.9[64161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-2 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:43:28 compute-2 systemd[62820]: Activating special unit Exit the Session...
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped target Main User Target.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped target Basic System.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped target Paths.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped target Sockets.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped target Timers.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:43:28 compute-2 systemd[62820]: Closed D-Bus User Message Bus Socket.
Oct  9 09:43:28 compute-2 systemd[62820]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:43:28 compute-2 systemd[62820]: Removed slice User Application Slice.
Oct  9 09:43:28 compute-2 systemd[62820]: Reached target Shutdown.
Oct  9 09:43:28 compute-2 systemd[62820]: Finished Exit the Session.
Oct  9 09:43:28 compute-2 systemd[62820]: Reached target Exit the Session.
Oct  9 09:43:28 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-2 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:43:28 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:43:28 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:43:28 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:43:28 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:43:28 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:43:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:28 compute-2 python3.9[64316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:29 compute-2 python3.9[64466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:43:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:29 compute-2 python3.9[64644]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  9 09:43:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:30 compute-2 python3.9[64795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:31 compute-2 python3.9[64916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003010.4273999-220-235911692735263/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:31 compute-2 python3.9[65067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:32 compute-2 python3.9[65189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003011.5769122-265-231101159802754/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:33 compute-2 python3.9[65341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:43:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:33 compute-2 python3.9[65426]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:43:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:35 compute-2 python3.9[65580]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:43:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:36 compute-2 python3.9[65734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:36.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:36 compute-2 python3.9[65857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003015.6655996-376-19616262839221/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:36 compute-2 python3.9[66007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:37 compute-2 python3.9[66128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003016.5473557-376-183098665605930/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:38 compute-2 python3.9[66280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct  9 09:43:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct  9 09:43:38 compute-2 python3.9[66401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003017.976951-508-226953453418663/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:39 compute-2 python3.9[66551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:39 compute-2 python3.9[66672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003018.8595133-508-66796641543675/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:40 compute-2 python3.9[66823]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:43:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:40 compute-2 python3.9[66978]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:41 compute-2 python3.9[67130]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:41 compute-2 python3.9[67208]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:41 compute-2 python3.9[67361]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:42 compute-2 python3.9[67439]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:42 compute-2 python3.9[67592]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:43 compute-2 python3.9[67744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:43 compute-2 python3.9[67822]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:44 compute-2 python3.9[67975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:44 compute-2 python3.9[68054]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:45 compute-2 python3.9[68206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:45 compute-2 systemd[1]: Reloading.
Oct  9 09:43:45 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:45 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:45 compute-2 python3.9[68397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:46 compute-2 python3.9[68475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:46 compute-2 python3.9[68628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:46 compute-2 python3.9[68706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:47 compute-2 python3.9[68858]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:43:47 compute-2 systemd[1]: Reloading.
Oct  9 09:43:47 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:43:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:47 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:43:47 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:43:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:47 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:43:47 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:43:47 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:43:48 compute-2 ovn_controller[62794]: 2025-10-09T09:43:48Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Oct  9 09:43:48 compute-2 ovn_controller[62794]: 2025-10-09T09:43:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct  9 09:43:48 compute-2 podman[69001]: 2025-10-09 09:43:48.23002796 +0000 UTC m=+0.060980065 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:43:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:48 compute-2 python3.9[69076]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:48 compute-2 python3.9[69228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:49 compute-2 python3.9[69376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003028.5918083-962-202310641851947/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:50 compute-2 python3.9[69529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:43:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:50 compute-2 python3.9[69682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:43:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:50 compute-2 python3.9[69805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003030.2235723-1036-15800937930524/.source.json _original_basename=.dx0hf3hb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:51 compute-2 python3.9[69957]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:43:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:51.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct  9 09:43:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct  9 09:43:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:53 compute-2 python3.9[70386]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  9 09:43:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:53 compute-2 python3.9[70539]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:43:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct  9 09:43:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct  9 09:43:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:54 compute-2 python3.9[70692]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:43:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:43:56 compute-2 python3[70864]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:43:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct  9 09:43:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct  9 09:43:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:58.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:43:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:43:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:43:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:43:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:43:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:00.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:03.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:04.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:04 compute-2 podman[70876]: 2025-10-09 09:44:04.521321872 +0000 UTC m=+8.357611900 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-2 podman[70983]: 2025-10-09 09:44:04.616499572 +0000 UTC m=+0.028427864 container create aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  9 09:44:04 compute-2 podman[70983]: 2025-10-09 09:44:04.603854343 +0000 UTC m=+0.015782655 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-2 python3[70864]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:44:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:05 compute-2 python3.9[71163]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:05.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:06 compute-2 python3.9[71318]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:06 compute-2 python3.9[71395]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:06 compute-2 python3.9[71546]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003046.4922872-1300-212613524092654/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:07 compute-2 python3.9[71622]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:07 compute-2 systemd[1]: Reloading.
Oct  9 09:44:07 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:07 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:08 compute-2 python3.9[71734]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:08 compute-2 systemd[1]: Reloading.
Oct  9 09:44:08 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:08 compute-2 systemd[1]: Starting ovn_metadata_agent container...
Oct  9 09:44:08 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:44:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cb402dff26e4d2635338b2d9f3c87774b48de2412b5a737e7b0cd9dd54e99e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cb402dff26e4d2635338b2d9f3c87774b48de2412b5a737e7b0cd9dd54e99e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:44:08 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335.
Oct  9 09:44:08 compute-2 podman[71776]: 2025-10-09 09:44:08.542509982 +0000 UTC m=+0.089556496 container init aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + sudo -E kolla_set_configs
Oct  9 09:44:08 compute-2 podman[71776]: 2025-10-09 09:44:08.564285202 +0000 UTC m=+0.111331696 container start aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 09:44:08 compute-2 edpm-start-podman-container[71776]: ovn_metadata_agent
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Validating config file
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Copying service configuration files
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Writing out command to execute
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: ++ cat /run_command
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + CMD=neutron-ovn-metadata-agent
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + ARGS=
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + sudo kolla_copy_cacerts
Oct  9 09:44:08 compute-2 podman[71795]: 2025-10-09 09:44:08.626581351 +0000 UTC m=+0.053213757 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:44:08 compute-2 edpm-start-podman-container[71775]: Creating additional drop-in dependency for "ovn_metadata_agent" (aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335)
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + [[ ! -n '' ]]
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + . kolla_extend_start
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: Running command: 'neutron-ovn-metadata-agent'
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + umask 0022
Oct  9 09:44:08 compute-2 ovn_metadata_agent[71788]: + exec neutron-ovn-metadata-agent
Oct  9 09:44:08 compute-2 systemd[1]: Reloading.
Oct  9 09:44:08 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:08 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:08 compute-2 systemd[1]: Started ovn_metadata_agent container.
Oct  9 09:44:09 compute-2 systemd[1]: session-35.scope: Deactivated successfully.
Oct  9 09:44:09 compute-2 systemd[1]: session-35.scope: Consumed 41.766s CPU time.
Oct  9 09:44:09 compute-2 systemd-logind[800]: Session 35 logged out. Waiting for processes to exit.
Oct  9 09:44:09 compute-2 systemd-logind[800]: Removed session 35.
Oct  9 09:44:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.262 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.262 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.274 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c24becb7-a313-4586-a73e-1530a4367da3 (UUID: c24becb7-a313-4586-a73e-1530a4367da3) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.293 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.298 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.302 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c24becb7-a313-4586-a73e-1530a4367da3'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], external_ids={}, name=c24becb7-a313-4586-a73e-1530a4367da3, nb_cfg_timestamp=1760003006161, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.303 71793 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f38807e6af0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.303 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.308 71793 DEBUG oslo_service.service [-] Started child 72001 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.311 72001 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-895417'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.311 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3edxmyi4/privsep.sock']#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.327 72001 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.328 72001 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.328 72001 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.330 72001 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.335 72001 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.339 72001 INFO eventlet.wsgi.server [-] (72001) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  9 09:44:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:10 compute-2 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.847 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.847 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3edxmyi4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.764 72006 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.768 72006 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.769 72006 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.770 72006 INFO oslo.privsep.daemon [-] privsep daemon running as pid 72006#033[00m
Oct  9 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.850 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a70575-8af0-4ee8-8eed-e12c59ebaa37]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:44:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.717 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[af3c2038-a93d-4b73-8057-cad477566a60]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.719 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, column=external_ids, values=({'neutron:ovn-metadata-id': '2b22cda5-e8f4-5cad-b7de-4c4bd08d93f0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.724 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.760 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.760 71793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:44:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:14.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:14 compute-2 systemd-logind[800]: New session 36 of user zuul.
Oct  9 09:44:14 compute-2 systemd[1]: Started Session 36 of User zuul.
Oct  9 09:44:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:15 compute-2 python3.9[72193]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:44:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:16 compute-2 python3.9[72350]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:17 compute-2 python3.9[72512]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:17 compute-2 systemd[1]: Reloading.
Oct  9 09:44:17 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:17 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:18 compute-2 python3.9[72698]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:44:18 compute-2 network[72716]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:44:18 compute-2 network[72717]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:44:18 compute-2 network[72718]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:44:18 compute-2 podman[72723]: 2025-10-09 09:44:18.379669528 +0000 UTC m=+0.067990031 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:44:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:21 compute-2 python3.9[73010]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:22.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:22 compute-2 python3.9[73164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:23 compute-2 python3.9[73317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:44:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:44:23 compute-2 python3.9[73470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:24 compute-2 python3.9[73624]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:24 compute-2 python3.9[73778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:25 compute-2 python3.9[73931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:44:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:26 compute-2 python3.9[74085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:26 compute-2 python3.9[74238]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:27 compute-2 python3.9[74390]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-2 python3.9[74542]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:28 compute-2 python3.9[74695]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:28 compute-2 python3.9[74848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:28 compute-2 python3.9[75000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:29 compute-2 python3.9[75177]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:29 compute-2 python3.9[75330]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:30 compute-2 python3.9[75483]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:30 compute-2 python3.9[75635]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-2 python3.9[75787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:31 compute-2 python3.9[75940]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:32 compute-2 python3.9[76092]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:44:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:32.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:32 compute-2 python3.9[76245]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:33 compute-2 python3.9[76397]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:44:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:34 compute-2 python3.9[76550]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:44:34 compute-2 systemd[1]: Reloading.
Oct  9 09:44:34 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:44:34 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:44:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:35 compute-2 python3.9[76738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-2 python3.9[76891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:35 compute-2 python3.9[77045]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-2 python3.9[77199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:36 compute-2 python3.9[77352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:37 compute-2 python3.9[77505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:37 compute-2 python3.9[77659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:44:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:38 compute-2 podman[77785]: 2025-10-09 09:44:38.745441972 +0000 UTC m=+0.041538030 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Oct  9 09:44:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:38 compute-2 python3.9[77827]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  9 09:44:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:39 compute-2 python3.9[77983]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:44:39 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:39 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:44:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:40 compute-2 python3.9[78144]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 09:44:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:40.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:41 compute-2 python3.9[78304]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:44:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:41 compute-2 python3.9[78389]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:44:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:44:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:44.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:44:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:46.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:48.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:49 compute-2 podman[78409]: 2025-10-09 09:44:49.225377899 +0000 UTC m=+0.056527806 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:44:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:44:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:51.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:53.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:44:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:44:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:56.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:44:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:44:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:44:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:44:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:44:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:44:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:44:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct  9 09:45:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:03.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:04 compute-2 kernel: SELinux:  Converting 472 SID table entries...
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:04 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct  9 09:45:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct  9 09:45:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:06.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:09 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct  9 09:45:09 compute-2 podman[78667]: 2025-10-09 09:45:09.211897101 +0000 UTC m=+0.042806072 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  9 09:45:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:45:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:11 compute-2 kernel: SELinux:  Converting 472 SID table entries...
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:11 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:13 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct  9 09:45:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:45:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2475 writes, 14K keys, 2475 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s#012Cumulative WAL: 2475 writes, 2475 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2475 writes, 14K keys, 2475 commit groups, 1.0 writes per commit group, ingest: 38.77 MB, 0.06 MB/s#012Interval WAL: 2475 writes, 2475 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    387.5      0.05              0.04         6    0.009       0      0       0.0       0.0#012  L6      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    434.2    375.1      0.17              0.10         5    0.034     19K   2242       0.0       0.0#012 Sum      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    328.7    378.1      0.22              0.14        11    0.020     19K   2242       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    330.1    379.6      0.22              0.14        10    0.022     19K   2242       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    434.2    375.1      0.17              0.10         5    0.034     19K   2242       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    393.9      0.05              0.04         5    0.011       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.021#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 2.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(165,2.03 MB,0.668094%) FilterBlock(11,66.42 KB,0.0213372%) IndexBlock(11,134.28 KB,0.0431362%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:45:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:14.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:45:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:45:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:20 compute-2 podman[78901]: 2025-10-09 09:45:20.238743038 +0000 UTC m=+0.061789498 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:45:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:22.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:32.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:34.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:38.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:40 compute-2 podman[95719]: 2025-10-09 09:45:40.200385388 +0000 UTC m=+0.036623841 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:45:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:43.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:46 compute-2 kernel: SELinux:  Converting 473 SID table entries...
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  9 09:45:46 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  9 09:45:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:46 compute-2 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Oct  9 09:45:46 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct  9 09:45:46 compute-2 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Oct  9 09:45:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:51 compute-2 podman[96024]: 2025-10-09 09:45:51.115624965 +0000 UTC m=+0.058748716 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Oct  9 09:45:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:51 compute-2 systemd[1]: Stopping OpenSSH server daemon...
Oct  9 09:45:51 compute-2 systemd[1]: sshd.service: Deactivated successfully.
Oct  9 09:45:51 compute-2 systemd[1]: Stopped OpenSSH server daemon.
Oct  9 09:45:51 compute-2 systemd[1]: sshd.service: Consumed 894ms CPU time, read 2.7M from disk, written 0B to disk.
Oct  9 09:45:51 compute-2 systemd[1]: Stopped target sshd-keygen.target.
Oct  9 09:45:51 compute-2 systemd[1]: Stopping sshd-keygen.target...
Oct  9 09:45:51 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  9 09:45:51 compute-2 systemd[1]: Reached target sshd-keygen.target.
Oct  9 09:45:51 compute-2 systemd[1]: Starting OpenSSH server daemon...
Oct  9 09:45:51 compute-2 systemd[1]: Started OpenSSH server daemon.
Oct  9 09:45:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:52 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:45:52 compute-2 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:45:52 compute-2 systemd[1]: Reloading.
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:53 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:45:53 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:45:53 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:45:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:53 compute-2 systemd[1]: Starting PackageKit Daemon...
Oct  9 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:53 compute-2 systemd[1]: Started PackageKit Daemon.
Oct  9 09:45:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:45:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:45:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:45:58 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:45:58 compute-2 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:45:58 compute-2 systemd[1]: man-db-cache-update.service: Consumed 6.879s CPU time.
Oct  9 09:45:58 compute-2 systemd[1]: run-r499906ddeaca40afaafa798fed3f30ad.service: Deactivated successfully.
Oct  9 09:45:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:45:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:58.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:45:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:45:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:45:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:45:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:45:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:46:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s#012Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  9 09:46:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:08.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:46:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:46:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:46:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:11 compute-2 podman[105261]: 2025-10-09 09:46:11.207762779 +0000 UTC m=+0.041259387 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:46:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  9 09:46:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:46:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:46:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:18.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:46:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:20 compute-2 python3.9[105493]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:20 compute-2 systemd[1]: Reloading.
Oct  9 09:46:20 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:20 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:21 compute-2 podman[105555]: 2025-10-09 09:46:21.224379557 +0000 UTC m=+0.059395956 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:46:21 compute-2 python3.9[105705]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:21 compute-2 systemd[1]: Reloading.
Oct  9 09:46:21 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:21 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.004000040s ======
Oct  9 09:46:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000040s
Oct  9 09:46:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:22 compute-2 python3.9[105897]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:22 compute-2 systemd[1]: Reloading.
Oct  9 09:46:22 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:22 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:23 compute-2 python3.9[106088]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:23 compute-2 systemd[1]: Reloading.
Oct  9 09:46:23 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:23 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:46:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:24 compute-2 python3.9[106304]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:24 compute-2 systemd[1]: Reloading.
Oct  9 09:46:24 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:24 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-2 python3.9[106494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:25 compute-2 systemd[1]: Reloading.
Oct  9 09:46:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:25 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:25 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:26 compute-2 python3.9[106686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:26 compute-2 systemd[1]: Reloading.
Oct  9 09:46:26 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:26 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:27 compute-2 python3.9[106875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:46:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:27.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:46:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:28 compute-2 python3.9[107031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:28 compute-2 systemd[1]: Reloading.
Oct  9 09:46:28 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:28 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:29 compute-2 python3.9[107222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  9 09:46:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:29 compute-2 systemd[1]: Reloading.
Oct  9 09:46:29 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:46:29 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:46:29 compute-2 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  9 09:46:29 compute-2 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  9 09:46:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:29.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:30 compute-2 python3.9[107442]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:31 compute-2 python3.9[107597]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:31 compute-2 python3.9[107752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:32 compute-2 python3.9[107908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:32 compute-2 python3.9[108067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:33 compute-2 python3.9[108222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:33 compute-2 python3.9[108378]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:34 compute-2 python3.9[108534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:35 compute-2 python3.9[108689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:35 compute-2 python3.9[108845]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-2 python3.9[109001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:37 compute-2 python3.9[109156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:37 compute-2 python3.9[109311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:38 compute-2 python3.9[109467]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  9 09:46:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:40.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:40 compute-2 python3.9[109625]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:41 compute-2 python3.9[109777]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:41 compute-2 podman[109901]: 2025-10-09 09:46:41.419750586 +0000 UTC m=+0.038450181 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:46:41 compute-2 python3.9[109945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:42 compute-2 python3.9[110098]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-2 python3.9[110251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:42 compute-2 python3.9[110403]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:43 compute-2 python3.9[110556]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:44 compute-2 python3.9[110682]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003203.2785082-1624-222909275187416/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:44 compute-2 python3.9[110834]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:45 compute-2 python3.9[110959]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003204.4076257-1624-65265225203423/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:45 compute-2 python3.9[111111]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:46 compute-2 python3.9[111237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003205.2692983-1624-255031006266652/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:46 compute-2 python3.9[111390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:46 compute-2 python3.9[111515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.1329062-1624-28827873194475/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:47 compute-2 python3.9[111667]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:47 compute-2 python3.9[111793]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.016855-1624-110603227701467/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:48 compute-2 python3.9[111945]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:48 compute-2 python3.9[112071]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.8874934-1624-98877234257813/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:49 compute-2 python3.9[112223]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:49 compute-2 python3.9[112346]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003208.7556229-1624-85334547532276/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:49 compute-2 python3.9[112499]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:50 compute-2 python3.9[112650]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003209.5846796-1624-275514908684397/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:51 compute-2 python3.9[112802]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  9 09:46:51 compute-2 podman[112927]: 2025-10-09 09:46:51.460325095 +0000 UTC m=+0.055099897 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  9 09:46:51 compute-2 python3.9[112972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:52 compute-2 python3.9[113132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-2 python3.9[113285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:52.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:52 compute-2 python3.9[113437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:53 compute-2 python3.9[113589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:53 compute-2 python3.9[113742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:54 compute-2 python3.9[113894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:54 compute-2 python3.9[114047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:55 compute-2 python3.9[114199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-2 python3.9[114351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:55 compute-2 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  9 09:46:55 compute-2 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  9 09:46:55 compute-2 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  9 09:46:55 compute-2 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  9 09:46:56 compute-2 python3.9[114506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-2 python3.9[114659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:56.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:56 compute-2 python3.9[114811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:57 compute-2 python3.9[114963]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:46:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:46:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:57 compute-2 python3.9[115116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:46:58 compute-2 python3.9[115240]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003217.6217384-2287-187263854618682/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:58 compute-2 python3.9[115392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:59 compute-2 python3.9[115515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003218.4530416-2287-98275886707179/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:46:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:46:59 compute-2 python3.9[115668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:46:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:46:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:46:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:46:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:46:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:00 compute-2 python3.9[115791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003219.295015-2287-71118806312740/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:00 compute-2 python3.9[115944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:00 compute-2 python3.9[116067]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.2104194-2287-20951953278809/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:01 compute-2 python3.9[116219]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:01.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:01 compute-2 python3.9[116343]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.0768297-2287-178335115248352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:02 compute-2 python3.9[116495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:02 compute-2 python3.9[116619]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.908661-2287-264805387401850/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:03 compute-2 python3.9[116771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:03 compute-2 python3.9[116894]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003222.7714226-2287-124579632906047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:03 compute-2 python3.9[117047]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:04 compute-2 python3.9[117171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003223.6110396-2287-108395645424791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:04 compute-2 python3.9[117323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:05 compute-2 python3.9[117446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003224.4478147-2287-198516489822151/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:05 compute-2 python3.9[117599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:06 compute-2 python3.9[117722]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003225.3746257-2287-250846801359665/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:06 compute-2 python3.9[117875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:07 compute-2 python3.9[117998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003226.2927415-2287-212053636094214/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:07 compute-2 python3.9[118150]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:08 compute-2 python3.9[118274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003227.248331-2287-91375763134553/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:08 compute-2 python3.9[118427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:08.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:08 compute-2 python3.9[118550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.1503737-2287-129227188622222/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:09 compute-2 python3.9[118702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:09 compute-2 python3.9[118826]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.9958992-2287-150289708376550/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:09.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:47:10 compute-2 python3.9[119002]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:47:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:47:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:10 compute-2 python3.9[119157]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  9 09:47:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:11.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:12 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  9 09:47:12 compute-2 podman[119287]: 2025-10-09 09:47:12.178829765 +0000 UTC m=+0.037785278 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:47:12 compute-2 python3.9[119332]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:12.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:12 compute-2 python3.9[119484]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:13 compute-2 python3.9[119636]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-2 python3.9[119789]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:14 compute-2 python3.9[119941]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:14 compute-2 python3.9[120094]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:15 compute-2 python3.9[120246]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-2 python3.9[120398]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:16 compute-2 python3.9[120551]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:16 compute-2 python3.9[120704]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:17 compute-2 python3.9[120856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:17 compute-2 systemd[1]: Reloading.
Oct  9 09:47:17 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:17 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon socket...
Oct  9 09:47:17 compute-2 systemd[1]: Listening on libvirt logging daemon socket.
Oct  9 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  9 09:47:17 compute-2 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  9 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon...
Oct  9 09:47:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:17 compute-2 systemd[1]: Started libvirt logging daemon.
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:18 compute-2 python3.9[121051]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:18 compute-2 systemd[1]: Reloading.
Oct  9 09:47:18 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:18 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  9 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  9 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  9 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  9 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  9 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  9 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Oct  9 09:47:18 compute-2 systemd[1]: Started libvirt nodedev daemon.
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:19 compute-2 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  9 09:47:19 compute-2 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  9 09:47:19 compute-2 python3.9[121267]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:19 compute-2 systemd[1]: Reloading.
Oct  9 09:47:19 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:19 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  9 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  9 09:47:19 compute-2 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  9 09:47:19 compute-2 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  9 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon...
Oct  9 09:47:19 compute-2 systemd[1]: Started libvirt proxy daemon.
Oct  9 09:47:19 compute-2 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  9 09:47:19 compute-2 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  9 09:47:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:20 compute-2 python3.9[121486]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:20 compute-2 systemd[1]: Reloading.
Oct  9 09:47:20 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:20 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:20 compute-2 setroubleshoot[121241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0eaec97f-aa6e-4607-a718-37c25c0f061f
Oct  9 09:47:20 compute-2 setroubleshoot[121241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
Oct  9 09:47:20 compute-2 systemd[1]: Listening on libvirt locking daemon socket.
Oct  9 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  9 09:47:20 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  9 09:47:20 compute-2 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  9 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  9 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  9 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  9 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  9 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  9 09:47:20 compute-2 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  9 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:47:20 compute-2 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:47:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:21 compute-2 python3.9[121700]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:47:21 compute-2 systemd[1]: Reloading.
Oct  9 09:47:21 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:21 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon socket...
Oct  9 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon socket.
Oct  9 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  9 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  9 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  9 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  9 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:47:21 compute-2 systemd[1]: Started libvirt secret daemon.
Oct  9 09:47:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:21 compute-2 podman[121883]: 2025-10-09 09:47:21.874537421 +0000 UTC m=+0.056858475 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct  9 09:47:22 compute-2 python3.9[121928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:22 compute-2 auditd[732]: Audit daemon rotating log files
Oct  9 09:47:22 compute-2 python3.9[122088]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:22 compute-2 python3.9[122240]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:23 compute-2 python3.9[122458]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:47:24 compute-2 python3.9[122624]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:24 compute-2 python3.9[122745]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003243.971643-3361-54701178436486/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c150843fcb80d0d0a9968a12abeb036b918e43ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:25 compute-2 python3.9[122897]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:25 compute-2 python3.9[123060]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:27 compute-2 python3.9[123549]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:28 compute-2 python3.9[123702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:28 compute-2 python3.9[123826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003247.7814064-3527-82530221301438/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:29 compute-2 python3.9[123978]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:30 compute-2 python3.9[124131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:30 compute-2 python3.9[124235]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:30 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  9 09:47:30 compute-2 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  9 09:47:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:30 compute-2 python3.9[124387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:31 compute-2 python3.9[124465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5mxafdoz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:31 compute-2 python3.9[124618]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:32 compute-2 python3.9[124696]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:32.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:32 compute-2 python3.9[124849]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:33 compute-2 python3[125002]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  9 09:47:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:33.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:33 compute-2 python3.9[125155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:34 compute-2 python3.9[125233]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:34 compute-2 python3.9[125386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:35 compute-2 python3.9[125464]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:35 compute-2 python3.9[125616]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:35 compute-2 python3.9[125695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:36 compute-2 python3.9[125848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:36 compute-2 python3.9[125926]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:37 compute-2 python3.9[126078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:37 compute-2 python3.9[126204]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003257.010631-3902-130414974550667/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-2 python3.9[126357]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:38 compute-2 python3.9[126509]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:39 compute-2 python3.9[126664]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:39.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:40 compute-2 python3.9[126817]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:40 compute-2 python3.9[126971]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:41 compute-2 python3.9[127125]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:47:41 compute-2 python3.9[127281]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:42 compute-2 python3.9[127433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:42 compute-2 podman[127529]: 2025-10-09 09:47:42.49154532 +0000 UTC m=+0.040853691 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:47:42 compute-2 python3.9[127573]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003261.8994794-4118-191446317715674/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:43 compute-2 python3.9[127725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:43 compute-2 python3.9[127848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003262.8906884-4163-8538300645112/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:44 compute-2 python3.9[128001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:47:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:44 compute-2 python3.9[128125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003263.8655121-4208-21489077032995/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:47:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:45 compute-2 python3.9[128277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:45 compute-2 systemd[1]: Reloading.
Oct  9 09:47:45 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:45 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:45 compute-2 systemd[1]: Reached target edpm_libvirt.target.
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:45.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:46 compute-2 python3.9[128469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  9 09:47:46 compute-2 systemd[1]: Reloading.
Oct  9 09:47:46 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:46 compute-2 systemd[1]: Reloading.
Oct  9 09:47:46 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:46 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:46.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:47 compute-2 systemd[1]: session-36.scope: Deactivated successfully.
Oct  9 09:47:47 compute-2 systemd[1]: session-36.scope: Consumed 2min 24.210s CPU time.
Oct  9 09:47:47 compute-2 systemd-logind[800]: Session 36 logged out. Waiting for processes to exit.
Oct  9 09:47:47 compute-2 systemd-logind[800]: Removed session 36.
Oct  9 09:47:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:49.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:52 compute-2 podman[128598]: 2025-10-09 09:47:52.219622021 +0000 UTC m=+0.055537484 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Oct  9 09:47:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:52 compute-2 systemd-logind[800]: New session 37 of user zuul.
Oct  9 09:47:52 compute-2 systemd[1]: Started Session 37 of User zuul.
Oct  9 09:47:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:53 compute-2 python3.9[128775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:47:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:53.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:54 compute-2 python3.9[128933]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:47:55 compute-2 python3.9[129085]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-2 python3.9[129237]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:56 compute-2 python3.9[129390]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:47:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:47:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:47:56 compute-2 python3.9[129543]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:47:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:57 compute-2 python3.9[129695]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:47:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:58 compute-2 python3.9[129851]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:47:58 compute-2 systemd[1]: Reloading.
Oct  9 09:47:58 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:47:58 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:47:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:47:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:59 compute-2 python3.9[130040]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:47:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:47:59 compute-2 network[130057]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:47:59 compute-2 network[130058]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:47:59 compute-2 network[130059]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:47:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:47:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:47:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:47:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:59.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:02 compute-2 python3.9[130336]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:02 compute-2 systemd[1]: Reloading.
Oct  9 09:48:02 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:02 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:03 compute-2 python3.9[130524]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:03.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:03 compute-2 python3.9[130677]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:48:04 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:48:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:06 compute-2 podman[130687]: 2025-10-09 09:48:06.46156246 +0000 UTC m=+2.500339583 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.548513767 +0000 UTC m=+0.025379559 container create cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5731] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-2 kernel: veth0: entered allmulticast mode
Oct  9 09:48:06 compute-2 kernel: veth0: entered promiscuous mode
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered blocking state
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered forwarding state
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5879] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5893] device (veth0): carrier: link connected
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5897] device (podman0): carrier: link connected
Oct  9 09:48:06 compute-2 systemd-udevd[130760]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-2 systemd-udevd[130763]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6094] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6103] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6110] device (podman0): Activation: starting connection 'podman0' (dcd2f42b-b181-4e8d-91dd-41a4682e8154)
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6112] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6117] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6120] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6124] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  9 09:48:06 compute-2 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  9 09:48:06 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6322] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6323] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6328] device (podman0): Activation: successful, device activated.
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.537051517 +0000 UTC m=+0.013917309 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:06 compute-2 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  9 09:48:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:06 compute-2 systemd[1]: Started libpod-conmon-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope.
Oct  9 09:48:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:06 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.817510919 +0000 UTC m=+0.294376721 container init cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.822447155 +0000 UTC m=+0.299312947 container start cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.823806087 +0000 UTC m=+0.300671869 container attach cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 09:48:06 compute-2 iscsid_config[130890]: iqn.1994-05.com.redhat:9c86a716692#015
Oct  9 09:48:06 compute-2 systemd[1]: libpod-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope: Deactivated successfully.
Oct  9 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.825787071 +0000 UTC m=+0.302652863 container died cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-2 kernel: veth0 (unregistering): left allmulticast mode
Oct  9 09:48:06 compute-2 kernel: veth0 (unregistering): left promiscuous mode
Oct  9 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct  9 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.8554] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 09:48:07 compute-2 systemd[1]: run-netns-netns\x2d09c06a0d\x2d2916\x2d83ef\x2d18ed\x2d275b7aea03c9.mount: Deactivated successfully.
Oct  9 09:48:07 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc-userdata-shm.mount: Deactivated successfully.
Oct  9 09:48:07 compute-2 podman[130737]: 2025-10-09 09:48:07.087679199 +0000 UTC m=+0.564544991 container remove cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:48:07 compute-2 systemd[1]: libpod-conmon-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope: Deactivated successfully.
Oct  9 09:48:07 compute-2 python3.9[130677]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct  9 09:48:07 compute-2 python3.9[130677]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  9 09:48:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-44bb6cfe5fc5e8430efc2845fc1bb2d5b9aae246adbe036623accd9cb8fedea9-merged.mount: Deactivated successfully.
Oct  9 09:48:07 compute-2 python3.9[131126]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:08 compute-2 python3.9[131249]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003287.3777783-319-224945256298722/.source.iscsi _original_basename=.iuewmxpg follow=False checksum=3866c381b9a8229703da1f68474b46516e3b1cdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:08 compute-2 python3.9[131402]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:09 compute-2 python3.9[131552]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:10 compute-2 python3.9[131732]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:48:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:10 compute-2 python3.9[131885]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:11 compute-2 python3.9[132037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:11 compute-2 python3.9[132116]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:12 compute-2 python3.9[132268]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:12 compute-2 python3.9[132347]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:12.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:12 compute-2 podman[132471]: 2025-10-09 09:48:12.956371803 +0000 UTC m=+0.065296423 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:13 compute-2 python3.9[132515]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:13 compute-2 python3.9[132668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:14 compute-2 python3.9[132746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:14 compute-2 python3.9[132899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:15 compute-2 python3.9[132977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:15 compute-2 python3.9[133129]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:15 compute-2 systemd[1]: Reloading.
Oct  9 09:48:15 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:15 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:15.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:16 compute-2 python3.9[133320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:16 compute-2 python3.9[133398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:16 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  9 09:48:17 compute-2 python3.9[133550]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:17 compute-2 python3.9[133628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:17.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:18 compute-2 python3.9[133781]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:18 compute-2 systemd[1]: Reloading.
Oct  9 09:48:18 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:18 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:18 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:48:18 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:18 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:18 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:48:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:18.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:19 compute-2 python3.9[133975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:19.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:19 compute-2 python3.9[134128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:20 compute-2 python3.9[134252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003299.5423007-782-67805315050740/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:20.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:21 compute-2 python3.9[134404]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:21 compute-2 python3.9[134556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:21.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:21 compute-2 python3.9[134680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003301.236646-856-136711426623362/.source.json _original_basename=.sdhh2sjh follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:22 compute-2 podman[134805]: 2025-10-09 09:48:22.35171856 +0000 UTC m=+0.060620750 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:48:22 compute-2 python3.9[134849]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:23.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:24 compute-2 python3.9[135284]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  9 09:48:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:24 compute-2 python3.9[135437]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:25 compute-2 python3.9[135590]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:25.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:27 compute-2 python3[135812]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:48:27 compute-2 podman[135889]: 2025-10-09 09:48:27.538517203 +0000 UTC m=+0.041219203 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  9 09:48:27 compute-2 podman[135906]: 2025-10-09 09:48:27.552505543 +0000 UTC m=+0.031030178 container create 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible)
Oct  9 09:48:27 compute-2 podman[135906]: 2025-10-09 09:48:27.537966435 +0000 UTC m=+0.016491070 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:27 compute-2 python3[135812]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  9 09:48:27 compute-2 podman[135943]: 2025-10-09 09:48:27.671931497 +0000 UTC m=+0.044146854 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:48:27 compute-2 podman[135889]: 2025-10-09 09:48:27.675694152 +0000 UTC m=+0.178396143 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  9 09:48:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:27 compute-2 podman[136084]: 2025-10-09 09:48:27.932175137 +0000 UTC m=+0.037885455 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:48:27 compute-2 podman[136084]: 2025-10-09 09:48:27.943005521 +0000 UTC m=+0.048715830 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:48:28 compute-2 python3.9[136220]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:28 compute-2 podman[136261]: 2025-10-09 09:48:28.246989287 +0000 UTC m=+0.036576368 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:48:28 compute-2 podman[136261]: 2025-10-09 09:48:28.255031943 +0000 UTC m=+0.044619025 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:48:28 compute-2 podman[136339]: 2025-10-09 09:48:28.403133805 +0000 UTC m=+0.035330838 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vcs-type=git, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct  9 09:48:28 compute-2 podman[136339]: 2025-10-09 09:48:28.41507816 +0000 UTC m=+0.047275192 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, distribution-scope=public, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct  9 09:48:28 compute-2 podman[136380]: 2025-10-09 09:48:28.526940892 +0000 UTC m=+0.034837398 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  9 09:48:28 compute-2 podman[136380]: 2025-10-09 09:48:28.537056387 +0000 UTC m=+0.044952894 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  9 09:48:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:28 compute-2 python3.9[136613]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:29 compute-2 python3.9[136706]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:48:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:29 compute-2 python3.9[136870]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003309.3228939-1120-181493640793045/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:30 compute-2 python3.9[136946]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:48:30 compute-2 systemd[1]: Reloading.
Oct  9 09:48:30 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:30 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:30 compute-2 ceph-mon[5983]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct  9 09:48:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:30 compute-2 python3.9[137082]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:31 compute-2 systemd[1]: Reloading.
Oct  9 09:48:31 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:31 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:31 compute-2 systemd[1]: Starting iscsid container...
Oct  9 09:48:31 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:48:31 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41.
Oct  9 09:48:31 compute-2 podman[137122]: 2025-10-09 09:48:31.367833764 +0000 UTC m=+0.084678890 container init 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  9 09:48:31 compute-2 iscsid[137134]: + sudo -E kolla_set_configs
Oct  9 09:48:31 compute-2 podman[137122]: 2025-10-09 09:48:31.391658352 +0000 UTC m=+0.108503458 container start 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:31 compute-2 systemd[1]: Created slice User Slice of UID 0.
Oct  9 09:48:31 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  9 09:48:31 compute-2 podman[137122]: iscsid
Oct  9 09:48:31 compute-2 systemd[1]: Started iscsid container.
Oct  9 09:48:31 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  9 09:48:31 compute-2 systemd[1]: Starting User Manager for UID 0...
Oct  9 09:48:31 compute-2 podman[137141]: 2025-10-09 09:48:31.459493777 +0000 UTC m=+0.059123927 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:48:31 compute-2 systemd[1]: 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41-44090ffe15e06773.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:48:31 compute-2 systemd[1]: 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41-44090ffe15e06773.service: Failed with result 'exit-code'.
Oct  9 09:48:31 compute-2 systemd[137149]: Queued start job for default target Main User Target.
Oct  9 09:48:31 compute-2 systemd[137149]: Created slice User Application Slice.
Oct  9 09:48:31 compute-2 systemd[137149]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  9 09:48:31 compute-2 systemd[137149]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:31 compute-2 systemd[137149]: Reached target Paths.
Oct  9 09:48:31 compute-2 systemd[137149]: Reached target Timers.
Oct  9 09:48:31 compute-2 systemd[137149]: Starting D-Bus User Message Bus Socket...
Oct  9 09:48:31 compute-2 systemd[137149]: Starting Create User's Volatile Files and Directories...
Oct  9 09:48:31 compute-2 systemd[137149]: Listening on D-Bus User Message Bus Socket.
Oct  9 09:48:31 compute-2 systemd[137149]: Reached target Sockets.
Oct  9 09:48:31 compute-2 systemd[137149]: Finished Create User's Volatile Files and Directories.
Oct  9 09:48:31 compute-2 systemd[137149]: Reached target Basic System.
Oct  9 09:48:31 compute-2 systemd[137149]: Reached target Main User Target.
Oct  9 09:48:31 compute-2 systemd[137149]: Startup finished in 96ms.
Oct  9 09:48:31 compute-2 systemd[1]: Started User Manager for UID 0.
Oct  9 09:48:31 compute-2 systemd[1]: Started Session c3 of User root.
Oct  9 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Validating config file
Oct  9 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Writing out command to execute
Oct  9 09:48:31 compute-2 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  9 09:48:31 compute-2 iscsid[137134]: ++ cat /run_command
Oct  9 09:48:31 compute-2 iscsid[137134]: + CMD='/usr/sbin/iscsid -f'
Oct  9 09:48:31 compute-2 iscsid[137134]: + ARGS=
Oct  9 09:48:31 compute-2 iscsid[137134]: + sudo kolla_copy_cacerts
Oct  9 09:48:31 compute-2 systemd[1]: Started Session c4 of User root.
Oct  9 09:48:31 compute-2 iscsid[137134]: Running command: '/usr/sbin/iscsid -f'
Oct  9 09:48:31 compute-2 iscsid[137134]: + [[ ! -n '' ]]
Oct  9 09:48:31 compute-2 iscsid[137134]: + . kolla_extend_start
Oct  9 09:48:31 compute-2 iscsid[137134]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  9 09:48:31 compute-2 iscsid[137134]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  9 09:48:31 compute-2 iscsid[137134]: + umask 0022
Oct  9 09:48:31 compute-2 iscsid[137134]: + exec /usr/sbin/iscsid -f
Oct  9 09:48:31 compute-2 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  9 09:48:31 compute-2 kernel: Loading iSCSI transport class v2.0-870.
Oct  9 09:48:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:31 compute-2 python3.9[137337]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:32 compute-2 python3.9[137490]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:33 compute-2 python3.9[137667]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:48:33 compute-2 network[137684]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:48:33 compute-2 network[137685]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:48:33 compute-2 network[137686]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:48:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:34.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-2 python3.9[137966]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:48:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:38 compute-2 python3.9[138119]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  9 09:48:39 compute-2 python3.9[138275]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:39 compute-2 python3.9[138399]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003319.1033359-1343-120866255677436/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:40 compute-2 python3.9[138552]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:41 compute-2 python3.9[138704]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:48:41 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:48:41 compute-2 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:48:41 compute-2 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:48:41 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:48:41 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:48:41 compute-2 systemd[1]: Stopping User Manager for UID 0...
Oct  9 09:48:41 compute-2 systemd[137149]: Activating special unit Exit the Session...
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped target Main User Target.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped target Basic System.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped target Paths.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped target Sockets.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped target Timers.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:48:41 compute-2 systemd[137149]: Closed D-Bus User Message Bus Socket.
Oct  9 09:48:41 compute-2 systemd[137149]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:48:41 compute-2 systemd[137149]: Removed slice User Application Slice.
Oct  9 09:48:41 compute-2 systemd[137149]: Reached target Shutdown.
Oct  9 09:48:41 compute-2 systemd[137149]: Finished Exit the Session.
Oct  9 09:48:41 compute-2 systemd[137149]: Reached target Exit the Session.
Oct  9 09:48:41 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-2 systemd[1]: Stopped User Manager for UID 0.
Oct  9 09:48:41 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  9 09:48:41 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  9 09:48:41 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  9 09:48:41 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  9 09:48:41 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Oct  9 09:48:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:41 compute-2 python3.9[138862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:42 compute-2 python3.9[139015]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:43 compute-2 podman[139139]: 2025-10-09 09:48:43.098931186 +0000 UTC m=+0.064236939 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:43 compute-2 python3.9[139184]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:43 compute-2 python3.9[139337]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:44 compute-2 python3.9[139460]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003323.4419153-1517-54096167749765/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:44 compute-2 python3.9[139613]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:45 compute-2 python3.9[139766]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:46 compute-2 python3.9[139919]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:46 compute-2 python3.9[140072]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:47 compute-2 python3.9[140224]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:47 compute-2 python3.9[140377]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:47.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:48 compute-2 python3.9[140529]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-2 python3.9[140682]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:49 compute-2 python3.9[140834]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:48:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:49 compute-2 python3.9[140989]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:50 compute-2 python3.9[141167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:50 compute-2 python3.9[141319]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-2 python3.9[141397]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:51 compute-2 python3.9[141550]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:48:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:51.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:48:52 compute-2 python3.9[141628]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:52 compute-2 podman[141753]: 2025-10-09 09:48:52.485070155 +0000 UTC m=+0.055552182 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:48:52 compute-2 python3.9[141799]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:53 compute-2 python3.9[141956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:53 compute-2 python3.9[142034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:54 compute-2 python3.9[142187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:54 compute-2 python3.9[142266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:48:55 compute-2 python3.9[142418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:48:55 compute-2 systemd[1]: Reloading.
Oct  9 09:48:55 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:55 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:55 compute-2 python3.9[142609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:48:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:48:56 compute-2 python3.9[142687]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:56 compute-2 python3.9[142840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:57 compute-2 python3.9[142918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:48:57 compute-2 python3.9[143070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:48:57 compute-2 systemd[1]: Reloading.
Oct  9 09:48:57 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:48:57 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:48:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:57 compute-2 systemd[1]: Starting Create netns directory...
Oct  9 09:48:57 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  9 09:48:57 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  9 09:48:57 compute-2 systemd[1]: Finished Create netns directory.
Oct  9 09:48:58 compute-2 python3.9[143265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:58.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:48:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:59 compute-2 python3.9[143417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:48:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:48:59 compute-2 python3.9[143540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003338.89317-2138-76611640407289/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:48:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:48:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:48:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:48:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:00 compute-2 python3.9[143694]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:00.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:00 compute-2 python3.9[143846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:01 compute-2 python3.9[143969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003340.616925-2212-230479721977533/.source.json _original_basename=.unvqt7g5 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:01 compute-2 podman[144094]: 2025-10-09 09:49:01.791777299 +0000 UTC m=+0.036718135 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  9 09:49:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:01 compute-2 python3.9[144135]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:03 compute-2 python3.9[144567]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  9 09:49:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:04 compute-2 python3.9[144720]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:49:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:04.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:05 compute-2 python3.9[144872]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  9 09:49:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:06 compute-2 python3[145045]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:49:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:08 compute-2 podman[145056]: 2025-10-09 09:49:08.439986535 +0000 UTC m=+1.735505063 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-2 podman[145104]: 2025-10-09 09:49:08.530462145 +0000 UTC m=+0.027020918 container create 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct  9 09:49:08 compute-2 podman[145104]: 2025-10-09 09:49:08.516679363 +0000 UTC m=+0.013238137 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-2 python3[145045]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  9 09:49:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:09 compute-2 python3.9[145283]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:09 compute-2 python3.9[145438]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:10 compute-2 python3.9[145514]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:49:10 compute-2 python3.9[145691]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003350.1851814-2476-248700953731878/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:11 compute-2 python3.9[145767]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:11 compute-2 systemd[1]: Reloading.
Oct  9 09:49:11 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:11 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:11 compute-2 python3.9[145879]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:11 compute-2 systemd[1]: Reloading.
Oct  9 09:49:11 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:11 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:12 compute-2 systemd[1]: Starting multipathd container...
Oct  9 09:49:12 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:49:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:12 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct  9 09:49:12 compute-2 podman[145919]: 2025-10-09 09:49:12.159505188 +0000 UTC m=+0.076622456 container init 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:49:12 compute-2 multipathd[145931]: + sudo -E kolla_set_configs
Oct  9 09:49:12 compute-2 podman[145919]: 2025-10-09 09:49:12.180552579 +0000 UTC m=+0.097669837 container start 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:49:12 compute-2 podman[145919]: multipathd
Oct  9 09:49:12 compute-2 systemd[1]: Started multipathd container.
Oct  9 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Validating config file
Oct  9 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Writing out command to execute
Oct  9 09:49:12 compute-2 multipathd[145931]: ++ cat /run_command
Oct  9 09:49:12 compute-2 multipathd[145931]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-2 multipathd[145931]: + ARGS=
Oct  9 09:49:12 compute-2 multipathd[145931]: + sudo kolla_copy_cacerts
Oct  9 09:49:12 compute-2 multipathd[145931]: + [[ ! -n '' ]]
Oct  9 09:49:12 compute-2 multipathd[145931]: + . kolla_extend_start
Oct  9 09:49:12 compute-2 multipathd[145931]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:12 compute-2 multipathd[145931]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:12 compute-2 multipathd[145931]: + umask 0022
Oct  9 09:49:12 compute-2 multipathd[145931]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:12 compute-2 podman[145939]: 2025-10-09 09:49:12.244489592 +0000 UTC m=+0.056543671 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct  9 09:49:12 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:12 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.service: Failed with result 'exit-code'.
Oct  9 09:49:12 compute-2 multipathd[145931]: 1036.744954 | --------start up--------
Oct  9 09:49:12 compute-2 multipathd[145931]: 1036.745033 | read /etc/multipath.conf
Oct  9 09:49:12 compute-2 multipathd[145931]: 1036.748804 | path checkers start up
Oct  9 09:49:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:12 compute-2 python3.9[146119]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:13 compute-2 podman[146246]: 2025-10-09 09:49:13.159425837 +0000 UTC m=+0.038475769 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  9 09:49:13 compute-2 python3.9[146287]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:13 compute-2 python3.9[146451]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:14 compute-2 systemd[1]: Stopping multipathd container...
Oct  9 09:49:14 compute-2 multipathd[145931]: 1038.531729 | exit (signal)
Oct  9 09:49:14 compute-2 multipathd[145931]: 1038.531767 | --------shut down-------
Oct  9 09:49:14 compute-2 systemd[1]: libpod-22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.scope: Deactivated successfully.
Oct  9 09:49:14 compute-2 podman[146455]: 2025-10-09 09:49:14.070586483 +0000 UTC m=+0.053483170 container died 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.timer: Deactivated successfully.
Oct  9 09:49:14 compute-2 systemd[1]: Stopped /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct  9 09:49:14 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-userdata-shm.mount: Deactivated successfully.
Oct  9 09:49:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb-merged.mount: Deactivated successfully.
Oct  9 09:49:14 compute-2 podman[146455]: 2025-10-09 09:49:14.139579952 +0000 UTC m=+0.122476638 container cleanup 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  9 09:49:14 compute-2 podman[146455]: multipathd
Oct  9 09:49:14 compute-2 podman[146481]: multipathd
Oct  9 09:49:14 compute-2 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  9 09:49:14 compute-2 systemd[1]: Stopped multipathd container.
Oct  9 09:49:14 compute-2 systemd[1]: Starting multipathd container...
Oct  9 09:49:14 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:49:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:49:14 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct  9 09:49:14 compute-2 podman[146490]: 2025-10-09 09:49:14.276273098 +0000 UTC m=+0.070530247 container init 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:49:14 compute-2 multipathd[146502]: + sudo -E kolla_set_configs
Oct  9 09:49:14 compute-2 podman[146490]: 2025-10-09 09:49:14.293894288 +0000 UTC m=+0.088151418 container start 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:49:14 compute-2 podman[146490]: multipathd
Oct  9 09:49:14 compute-2 systemd[1]: Started multipathd container.
Oct  9 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Validating config file
Oct  9 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Writing out command to execute
Oct  9 09:49:14 compute-2 multipathd[146502]: ++ cat /run_command
Oct  9 09:49:14 compute-2 multipathd[146502]: + CMD='/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-2 multipathd[146502]: + ARGS=
Oct  9 09:49:14 compute-2 multipathd[146502]: + sudo kolla_copy_cacerts
Oct  9 09:49:14 compute-2 multipathd[146502]: Running command: '/usr/sbin/multipathd -d'
Oct  9 09:49:14 compute-2 multipathd[146502]: + [[ ! -n '' ]]
Oct  9 09:49:14 compute-2 multipathd[146502]: + . kolla_extend_start
Oct  9 09:49:14 compute-2 multipathd[146502]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  9 09:49:14 compute-2 multipathd[146502]: + umask 0022
Oct  9 09:49:14 compute-2 multipathd[146502]: + exec /usr/sbin/multipathd -d
Oct  9 09:49:14 compute-2 multipathd[146502]: 1038.860461 | --------start up--------
Oct  9 09:49:14 compute-2 multipathd[146502]: 1038.860723 | read /etc/multipath.conf
Oct  9 09:49:14 compute-2 podman[146509]: 2025-10-09 09:49:14.369387754 +0000 UTC m=+0.066486492 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  9 09:49:14 compute-2 multipathd[146502]: 1038.864244 | path checkers start up
Oct  9 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-68de4bb5e73be16b.service: Main process exited, code=exited, status=1/FAILURE
Oct  9 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-68de4bb5e73be16b.service: Failed with result 'exit-code'.
Oct  9 09:49:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:14 compute-2 python3.9[146691]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:15 compute-2 python3.9[146844]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  9 09:49:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:16 compute-2 python3.9[146997]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  9 09:49:16 compute-2 kernel: Key type psk registered
Oct  9 09:49:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:16.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:17 compute-2 python3.9[147160]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:49:17 compute-2 python3.9[147283]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003356.6451046-2716-182697927741461/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:18 compute-2 python3.9[147436]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:18 compute-2 python3.9[147589]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:49:18 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  9 09:49:18 compute-2 systemd[1]: Stopped Load Kernel Modules.
Oct  9 09:49:18 compute-2 systemd[1]: Stopping Load Kernel Modules...
Oct  9 09:49:18 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct  9 09:49:18 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct  9 09:49:18 compute-2 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  9 09:49:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:19 compute-2 python3.9[147746]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  9 09:49:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:19 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  9 09:49:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:49:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:49:20 compute-2 python3.9[147832]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  9 09:49:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:23 compute-2 podman[147837]: 2025-10-09 09:49:23.225360822 +0000 UTC m=+0.059998517 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:49:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:49:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:24.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:25 compute-2 systemd[1]: Reloading.
Oct  9 09:49:25 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:25 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:25 compute-2 systemd[1]: Reloading.
Oct  9 09:49:25 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:25 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:25 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  9 09:49:25 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  9 09:49:26 compute-2 lvm[147969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 09:49:26 compute-2 lvm[147969]: VG ceph_vg0 finished
Oct  9 09:49:26 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  9 09:49:26 compute-2 systemd[1]: Starting man-db-cache-update.service...
Oct  9 09:49:26 compute-2 systemd[1]: Reloading.
Oct  9 09:49:26 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:26 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:26 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  9 09:49:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:26 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  9 09:49:26 compute-2 systemd[1]: Finished man-db-cache-update.service.
Oct  9 09:49:26 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.021s CPU time.
Oct  9 09:49:26 compute-2 systemd[1]: run-rd3290976340a4b3698477c5ce5214fbb.service: Deactivated successfully.
Oct  9 09:49:27 compute-2 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  9 09:49:27 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 09:49:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:28 compute-2 python3.9[149312]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:28 compute-2 python3.9[149463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  9 09:49:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:29 compute-2 python3.9[149620]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:30 compute-2 python3.9[149798]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:30 compute-2 systemd[1]: Reloading.
Oct  9 09:49:30 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:30 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:31 compute-2 python3.9[149982]: ansible-ansible.builtin.service_facts Invoked
Oct  9 09:49:31 compute-2 network[150000]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  9 09:49:31 compute-2 network[150001]: 'network-scripts' will be removed from distribution in near future.
Oct  9 09:49:31 compute-2 network[150002]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  9 09:49:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:32 compute-2 podman[150009]: 2025-10-09 09:49:32.403716124 +0000 UTC m=+0.053890289 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:49:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:32.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:49:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:34.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:35 compute-2 python3.9[150379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:36 compute-2 python3.9[150533]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:36 compute-2 python3.9[150687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:36.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:49:37 compute-2 python3.9[150840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:37.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:37 compute-2 python3.9[151019]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:38 compute-2 python3.9[151173]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:38.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:39 compute-2 python3.9[151326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:39 compute-2 python3.9[151479]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:49:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:40.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:40 compute-2 python3.9[151634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-2 python3.9[151786]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:41 compute-2 python3.9[151939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:42 compute-2 python3.9[152091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-2 python3.9[152244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:42.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:43 compute-2 python3.9[152396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-2 podman[152520]: 2025-10-09 09:49:43.509494249 +0000 UTC m=+0.045130359 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:49:43 compute-2 python3.9[152565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:44 compute-2 python3.9[152717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:44 compute-2 podman[152842]: 2025-10-09 09:49:44.595562331 +0000 UTC m=+0.046644261 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:49:44 compute-2 python3.9[152886]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:45 compute-2 python3.9[153038]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-2 python3.9[153190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:46 compute-2 python3.9[153343]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-2 python3.9[153496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:46 compute-2 python3.9[153648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-2 python3.9[153800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:47 compute-2 python3.9[153953]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:49:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:48 compute-2 python3.9[154106]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:49 compute-2 python3.9[154258]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  9 09:49:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:50 compute-2 python3.9[154411]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:49:50 compute-2 systemd[1]: Reloading.
Oct  9 09:49:50 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:49:50 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:49:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:49:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:49:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:50 compute-2 python3.9[154623]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-2 python3.9[154776]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:51 compute-2 python3.9[154930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:52 compute-2 python3.9[155083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-2 python3.9[155237]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:53 compute-2 python3.9[155390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:53 compute-2 podman[155516]: 2025-10-09 09:49:53.752412948 +0000 UTC m=+0.062267427 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:53 compute-2 python3.9[155560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:54 compute-2 python3.9[155721]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  9 09:49:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:55 compute-2 python3.9[155875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:55.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:56 compute-2 python3.9[156028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:49:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:57 compute-2 python3.9[156180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-2 python3.9[156332]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:49:58 compute-2 python3.9[156485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:49:58 compute-2 python3.9[156638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:49:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:58.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:58 compute-2 python3.9[156790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-2 python3.9[156942]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:49:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:49:59 compute-2 python3.9[157095]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:49:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:49:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:49:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:00 compute-2 systemd[1]: Starting system activity accounting tool...
Oct  9 09:50:00 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 09:50:00 compute-2 systemd[1]: Finished system activity accounting tool.
Oct  9 09:50:00 compute-2 python3.9[157249]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:00 compute-2 ceph-mon[5983]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-2 ceph-mon[5983]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct  9 09:50:00 compute-2 ceph-mon[5983]:    daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
Oct  9 09:50:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:00 compute-2 python3.9[157401]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:01 compute-2 python3.9[157553]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:50:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:50:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:03 compute-2 podman[157580]: 2025-10-09 09:50:03.205337179 +0000 UTC m=+0.039792499 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=iscsid)
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:06 compute-2 python3.9[157728]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  9 09:50:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:07 compute-2 python3.9[157881]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  9 09:50:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:07 compute-2 python3.9[158040]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  9 09:50:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:07.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:08 compute-2 systemd-logind[800]: New session 39 of user zuul.
Oct  9 09:50:08 compute-2 systemd[1]: Started Session 39 of User zuul.
Oct  9 09:50:08 compute-2 systemd[1]: session-39.scope: Deactivated successfully.
Oct  9 09:50:08 compute-2 systemd-logind[800]: Session 39 logged out. Waiting for processes to exit.
Oct  9 09:50:08 compute-2 systemd-logind[800]: Removed session 39.
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:09 compute-2 python3.9[158227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:09 compute-2 python3.9[158348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003408.9376636-4354-218489464400853/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:10 compute-2 python3.9[158499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:10 compute-2 python3.9[158576]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:10 compute-2 python3.9[158751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:11 compute-2 python3.9[158872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003410.5074158-4354-142180640155539/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:11 compute-2 python3.9[159023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:12 compute-2 python3.9[159144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003411.336021-4354-226122153270060/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:12 compute-2 python3.9[159295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:12 compute-2 python3.9[159416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003412.1994257-4354-258665901310620/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:13 compute-2 python3.9[159568]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:13 compute-2 podman[159570]: 2025-10-09 09:50:13.661287643 +0000 UTC m=+0.036879943 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:13.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:14 compute-2 python3.9[159737]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:14 compute-2 python3.9[159890]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:14.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:15 compute-2 podman[160014]: 2025-10-09 09:50:15.052559858 +0000 UTC m=+0.063203086 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 09:50:15 compute-2 python3.9[160059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:15 compute-2 python3.9[160182]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760003414.819866-4634-79753486685892/.source _original_basename=.jpwrnzve follow=False checksum=aa4ab1190f26dbb82ebaaa3faed5c12a78625b91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  9 09:50:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct  9 09:50:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct  9 09:50:16 compute-2 python3.9[160335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:16.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:16 compute-2 python3.9[160488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:17 compute-2 python3.9[160609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003416.4954014-4711-14755408612466/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:17 compute-2 python3.9[160760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:18 compute-2 python3.9[160881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003417.3833544-4756-48525577080055/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  9 09:50:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:18.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:18 compute-2 python3.9[161034]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  9 09:50:19 compute-2 python3.9[161186]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:20 compute-2 python3[161339]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:22.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:24 compute-2 podman[161375]: 2025-10-09 09:50:24.220122186 +0000 UTC m=+0.057324983 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct  9 09:50:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct  9 09:50:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:25.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:27.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:32 compute-2 podman[161350]: 2025-10-09 09:50:32.881423845 +0000 UTC m=+12.600853015 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:32 compute-2 podman[161475]: 2025-10-09 09:50:32.976560361 +0000 UTC m=+0.028437651 container create 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:32 compute-2 podman[161475]: 2025-10-09 09:50:32.9630418 +0000 UTC m=+0.014919110 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:32 compute-2 python3[161339]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  9 09:50:33 compute-2 podman[161627]: 2025-10-09 09:50:33.634526495 +0000 UTC m=+0.042787828 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:33 compute-2 python3.9[161671]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:33.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:34 compute-2 python3.9[161827]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  9 09:50:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:35 compute-2 python3.9[161979]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  9 09:50:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:36 compute-2 python3[162132]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  9 09:50:36 compute-2 podman[162160]: 2025-10-09 09:50:36.181682672 +0000 UTC m=+0.028527981 container create b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:36 compute-2 podman[162160]: 2025-10-09 09:50:36.168370851 +0000 UTC m=+0.015216160 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  9 09:50:36 compute-2 python3[162132]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:36 compute-2 python3.9[162340]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:36.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:37 compute-2 python3.9[162532]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:50:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:37.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct  9 09:50:37 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:37.998413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct  9 09:50:37 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003437998436, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4682, "num_deletes": 502, "total_data_size": 12784616, "memory_usage": 12956176, "flush_reason": "Manual Compaction"}
Oct  9 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct  9 09:50:38 compute-2 python3.9[162725]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003437.5086284-5031-30989634723154/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438016523, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8291926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13232, "largest_seqno": 17909, "table_properties": {"data_size": 8274219, "index_size": 11961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36680, "raw_average_key_size": 19, "raw_value_size": 8237464, "raw_average_value_size": 4428, "num_data_blocks": 522, "num_entries": 1860, "num_filter_entries": 1860, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002995, "oldest_key_time": 1760002995, "file_creation_time": 1760003437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18141 microseconds, and 10112 cpu microseconds.
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.016553) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8291926 bytes OK
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.016566) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018253) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018265) EVENT_LOG_v1 {"time_micros": 1760003438018262, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12764192, prev total WAL file size 12764192, number of live WAL files 2.
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.019805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8097KB)], [27(11MB)]
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438019865, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19849869, "oldest_snapshot_seqno": -1}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4994 keys, 15244081 bytes, temperature: kUnknown
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438092756, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15244081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15205925, "index_size": 24542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 124761, "raw_average_key_size": 24, "raw_value_size": 15110539, "raw_average_value_size": 3025, "num_data_blocks": 1034, "num_entries": 4994, "num_filter_entries": 4994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.093908) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15244081 bytes
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.094988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.2 rd, 209.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6017, records dropped: 1023 output_compression: NoCompression
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.095007) EVENT_LOG_v1 {"time_micros": 1760003438094999, "job": 14, "event": "compaction_finished", "compaction_time_micros": 72926, "compaction_time_cpu_micros": 22502, "output_level": 6, "num_output_files": 1, "total_output_size": 15244081, "num_input_records": 6017, "num_output_records": 4994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438096384, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438097817, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.019741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:50:38 compute-2 python3.9[162803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  9 09:50:38 compute-2 systemd[1]: Reloading.
Oct  9 09:50:38 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:38 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:39 compute-2 python3.9[162914]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  9 09:50:39 compute-2 systemd[1]: Reloading.
Oct  9 09:50:39 compute-2 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  9 09:50:39 compute-2 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  9 09:50:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:39 compute-2 systemd[1]: Starting nova_compute container...
Oct  9 09:50:39 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:39 compute-2 podman[162955]: 2025-10-09 09:50:39.605735169 +0000 UTC m=+0.072724334 container init b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:50:39 compute-2 podman[162955]: 2025-10-09 09:50:39.61053194 +0000 UTC m=+0.077521086 container start b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute)
Oct  9 09:50:39 compute-2 podman[162955]: nova_compute
Oct  9 09:50:39 compute-2 nova_compute[162967]: + sudo -E kolla_set_configs
Oct  9 09:50:39 compute-2 systemd[1]: Started nova_compute container.
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Validating config file
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying service configuration files
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Writing out command to execute
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:39 compute-2 nova_compute[162967]: ++ cat /run_command
Oct  9 09:50:39 compute-2 nova_compute[162967]: + CMD=nova-compute
Oct  9 09:50:39 compute-2 nova_compute[162967]: + ARGS=
Oct  9 09:50:39 compute-2 nova_compute[162967]: + sudo kolla_copy_cacerts
Oct  9 09:50:39 compute-2 nova_compute[162967]: + [[ ! -n '' ]]
Oct  9 09:50:39 compute-2 nova_compute[162967]: + . kolla_extend_start
Oct  9 09:50:39 compute-2 nova_compute[162967]: Running command: 'nova-compute'
Oct  9 09:50:39 compute-2 nova_compute[162967]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:39 compute-2 nova_compute[162967]: + umask 0022
Oct  9 09:50:39 compute-2 nova_compute[162967]: + exec nova-compute
Oct  9 09:50:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:40 compute-2 python3.9[163130]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:40.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:41 compute-2 python3.9[163280]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.499 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.499 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.500 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.500 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.619 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.629 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.064 2 INFO nova.virt.driver [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.145 2 INFO nova.compute.provider_config [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.152 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 python3.9[163460]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 WARNING oslo_config.cfg [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:42 compute-2 nova_compute[162967]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:42 compute-2 nova_compute[162967]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:42 compute-2 nova_compute[162967]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:42 compute-2 nova_compute[162967]: ).  Its value may be silently ignored in the future.#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.284 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.284 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.325 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  9 09:50:42 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Oct  9 09:50:42 compute-2 systemd[1]: Started libvirt QEMU daemon.
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.373 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe1f0c504c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.375 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe1f0c504c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.376 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.386 2 WARNING nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.386 2 DEBUG nova.virt.libvirt.volume.mount [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  9 09:50:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:50:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:42 compute-2 python3.9[163665]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.061 2 INFO nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: 
Oct  9 09:50:43 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <host>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <uuid>ed712924-75ec-452a-a842-ae61b9b9ed0c</uuid>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <arch>x86_64</arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <microcode version='167776725'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='x2apic'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='tsc-deadline'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='osxsave'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='hypervisor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='tsc_adjust'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='ospke'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='vaes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='spec-ctrl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='stibp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='arch-capabilities'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='cmp_legacy'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='virt-ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='lbrv'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='tsc-scale'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='vmcb-clean'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='pause-filter'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='pfthreshold'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='vgif'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='rdctl-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='mds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <power_management>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <suspend_mem/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </power_management>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <iommu support='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <migration_features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <live/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <uri_transports>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </uri_transports>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </migration_features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <topology>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <cells num='1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <cell id='0'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <distances>
Oct  9 09:50:43 compute-2 nova_compute[162967]:            <sibling id='0' value='10'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          </distances>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          <cpus num='4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:          </cpus>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        </cell>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </cells>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </topology>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <cache>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </cache>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <secmodel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model>selinux</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <doi>0</doi>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </secmodel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <secmodel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model>dac</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <doi>0</doi>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </secmodel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </host>
Oct  9 09:50:43 compute-2 nova_compute[162967]: 
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <guest>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <arch name='i686'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <wordsize>32</wordsize>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <pae/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <nonpae/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <cpuselection/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <deviceboot/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <externalSnapshot/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </guest>
Oct  9 09:50:43 compute-2 nova_compute[162967]: 
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <guest>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <os_type>hvm</os_type>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <arch name='x86_64'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <wordsize>64</wordsize>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <domain type='qemu'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <domain type='kvm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <apic default='on' toggle='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <cpuselection/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <deviceboot/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <externalSnapshot/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </guest>
Oct  9 09:50:43 compute-2 nova_compute[162967]: 
Oct  9 09:50:43 compute-2 nova_compute[162967]: </capabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: #033[00m
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.065 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.079 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <arch>i686</arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <os supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <loader supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>rom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pflash</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='readonly'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>yes</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='secure'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </loader>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </os>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>file</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>anonymous</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>memfd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </memoryBacking>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <disk supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>disk</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cdrom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>floppy</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>lun</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ide</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>fdc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>sata</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </disk>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vnc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>dbus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </graphics>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <video supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='modelType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vga</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cirrus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>none</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>bochs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ramfb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </video>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='mode'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>subsystem</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>mandatory</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>requisite</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>optional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pci</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hostdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <rng supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>random</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </rng>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='driverType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>path</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>handle</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </filesystem>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emulator</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>external</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>2.0</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </tpm>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </redirdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <channel supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pty</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>unix</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </channel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>qemu</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </crypto>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <interface supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>passt</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </interface>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <panic supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>isa</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>hyperv</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </panic>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <gic supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sev supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='features'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>relaxed</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vapic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vpindex</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>runtime</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>synic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>stimer</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reset</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>frequencies</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ipi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>avic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hyperv>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.082 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <arch>i686</arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <os supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <loader supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>rom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pflash</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='readonly'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>yes</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='secure'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </loader>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </os>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>file</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>anonymous</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>memfd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </memoryBacking>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <disk supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>disk</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cdrom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>floppy</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>lun</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>fdc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>sata</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </disk>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vnc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>dbus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </graphics>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <video supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='modelType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vga</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cirrus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>none</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>bochs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ramfb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </video>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='mode'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>subsystem</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>mandatory</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>requisite</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>optional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pci</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hostdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <rng supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>random</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </rng>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='driverType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>path</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>handle</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </filesystem>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emulator</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>external</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>2.0</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </tpm>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </redirdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <channel supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pty</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>unix</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </channel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>qemu</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </crypto>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <interface supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>passt</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </interface>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <panic supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>isa</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>hyperv</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </panic>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <gic supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sev supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='features'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>relaxed</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vapic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vpindex</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>runtime</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>synic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>stimer</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reset</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>frequencies</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ipi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>avic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hyperv>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.113 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.115 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <vcpu max='240'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <os supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='firmware'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <loader supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>rom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pflash</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='readonly'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>yes</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='secure'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </loader>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </os>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>file</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>anonymous</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>memfd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </memoryBacking>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <disk supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>disk</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cdrom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>floppy</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>lun</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ide</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>fdc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>sata</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </disk>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vnc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>dbus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </graphics>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <video supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='modelType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vga</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cirrus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>none</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>bochs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ramfb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </video>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='mode'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>subsystem</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>mandatory</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>requisite</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>optional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pci</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hostdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <rng supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>random</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </rng>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='driverType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>path</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>handle</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </filesystem>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emulator</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>external</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>2.0</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </tpm>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </redirdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <channel supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pty</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>unix</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </channel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>qemu</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </crypto>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <interface supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>passt</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </interface>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <panic supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>isa</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>hyperv</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </panic>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <gic supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sev supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='features'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>relaxed</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vapic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vpindex</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>runtime</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>synic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>stimer</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reset</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>frequencies</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ipi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>avic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hyperv>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.161 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <domain>kvm</domain>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <arch>x86_64</arch>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <vcpu max='4096'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <iothreads supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <os supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='firmware'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>efi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <loader supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>rom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pflash</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='readonly'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>yes</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='secure'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>yes</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>no</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </loader>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </os>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='maximumMigratable'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>on</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>off</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <vendor>AMD</vendor>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='succor'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <mode name='custom' supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Denverton-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='auto-ibrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amd-psfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='stibp-always-on'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-128'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-256'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx10-512'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='prefetchiti'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Haswell-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512er'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512pf'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fma4'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tbm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xop'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='amx-tile'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-bf16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-fp16'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bitalg'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrc'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fzrm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='la57'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='taa-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='xfd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ifma'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cmpccxadd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fbsdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='fsrs'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ibrs-all'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mcdt-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='pbrsb-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='psdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='serialize'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='hle'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='rtm'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512bw'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512cd'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512dq'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512f'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='avx512vl'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='mpx'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='core-capability'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='split-lock-detect'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='cldemote'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='gfni'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdir64b'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='movdiri'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='athlon-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='core2duo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='coreduo-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='n270-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='ss'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <blockers model='phenom-v1'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnow'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <feature name='3dnowext'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </blockers>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </mode>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </cpu>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <memoryBacking supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <enum name='sourceType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>file</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>anonymous</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <value>memfd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </memoryBacking>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <disk supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='diskDevice'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>disk</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cdrom</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>floppy</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>lun</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>fdc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>sata</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </disk>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <graphics supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vnc</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egl-headless</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>dbus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </graphics>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <video supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='modelType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vga</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>cirrus</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>none</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>bochs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ramfb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </video>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hostdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='mode'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>subsystem</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='startupPolicy'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>mandatory</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>requisite</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>optional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='subsysType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pci</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>scsi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='capsType'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='pciBackend'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hostdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <rng supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtio-non-transitional</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>random</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>egd</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </rng>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <filesystem supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='driverType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>path</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>handle</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>virtiofs</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </filesystem>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <tpm supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-tis</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tpm-crb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emulator</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>external</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendVersion'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>2.0</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </tpm>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <redirdev supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='bus'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>usb</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </redirdev>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <channel supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>pty</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>unix</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </channel>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <crypto supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='type'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>qemu</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendModel'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>builtin</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </crypto>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <interface supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='backendType'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>default</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>passt</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </interface>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <panic supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='model'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>isa</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>hyperv</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </panic>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </devices>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  <features>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <gic supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <genid supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <backup supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <async-teardown supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <ps2 supported='yes'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sev supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <sgx supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <hyperv supported='yes'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      <enum name='features'>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>relaxed</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vapic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>spinlocks</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vpindex</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>runtime</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>synic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>stimer</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reset</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>vendor_id</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>frequencies</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>reenlightenment</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>tlbflush</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>ipi</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>avic</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>emsr_bitmap</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:        <value>xmm_input</value>
Oct  9 09:50:43 compute-2 nova_compute[162967]:      </enum>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    </hyperv>
Oct  9 09:50:43 compute-2 nova_compute[162967]:    <launchSecurity supported='no'/>
Oct  9 09:50:43 compute-2 nova_compute[162967]:  </features>
Oct  9 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct  9 09:50:43 compute-2 nova_compute[162967]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.196 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 INFO nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Secure Boot support detected
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.198 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.198 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.206 2 DEBUG nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.222 2 INFO nova.virt.node [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.238 2 WARNING nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Compute nodes ['41a86af9-054a-49c9-9d2e-f0396c1c31a8'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.290 2 INFO nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 WARNING nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG nova.compute.resource_tracker [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.309 2 DEBUG oslo_concurrency.processutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:50:43 compute-2 python3.9[163850]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  9 09:50:43 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:43 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2933965568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.652 2 DEBUG oslo_concurrency.processutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:50:43 compute-2 systemd[1]: Stopping nova_compute container...
Oct  9 09:50:43 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Oct  9 09:50:43 compute-2 systemd[1]: Started libvirt nodedev daemon.
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  9 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  9 09:50:43 compute-2 podman[163877]: 2025-10-09 09:50:43.721267578 +0000 UTC m=+0.037459335 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:44 compute-2 virtqemud[163507]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  9 09:50:44 compute-2 virtqemud[163507]: hostname: compute-2
Oct  9 09:50:44 compute-2 virtqemud[163507]: End of file while reading data: Input/output error
Oct  9 09:50:44 compute-2 systemd[1]: libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope: Deactivated successfully.
Oct  9 09:50:44 compute-2 systemd[1]: libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope: Consumed 2.822s CPU time.
Oct  9 09:50:44 compute-2 conmon[162967]: conmon b4db2b5c58032a2a0063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope/container/memory.events
Oct  9 09:50:44 compute-2 podman[163876]: 2025-10-09 09:50:44.090420097 +0000 UTC m=+0.407844087 container died b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  9 09:50:44 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b-merged.mount: Deactivated successfully.
Oct  9 09:50:44 compute-2 podman[163876]: 2025-10-09 09:50:44.155118433 +0000 UTC m=+0.472542423 container cleanup b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Oct  9 09:50:44 compute-2 podman[163876]: nova_compute
Oct  9 09:50:44 compute-2 podman[163940]: nova_compute
Oct  9 09:50:44 compute-2 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  9 09:50:44 compute-2 systemd[1]: Stopped nova_compute container.
Oct  9 09:50:44 compute-2 systemd[1]: Starting nova_compute container...
Oct  9 09:50:44 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:44 compute-2 podman[163949]: 2025-10-09 09:50:44.283705909 +0000 UTC m=+0.065004755 container init b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 09:50:44 compute-2 podman[163949]: 2025-10-09 09:50:44.288404915 +0000 UTC m=+0.069703751 container start b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 09:50:44 compute-2 podman[163949]: nova_compute
Oct  9 09:50:44 compute-2 nova_compute[163961]: + sudo -E kolla_set_configs
Oct  9 09:50:44 compute-2 systemd[1]: Started nova_compute container.
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Validating config file
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying service configuration files
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/ceph
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Creating directory /etc/ceph
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Writing out command to execute
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  9 09:50:44 compute-2 nova_compute[163961]: ++ cat /run_command
Oct  9 09:50:44 compute-2 nova_compute[163961]: + CMD=nova-compute
Oct  9 09:50:44 compute-2 nova_compute[163961]: + ARGS=
Oct  9 09:50:44 compute-2 nova_compute[163961]: + sudo kolla_copy_cacerts
Oct  9 09:50:44 compute-2 nova_compute[163961]: + [[ ! -n '' ]]
Oct  9 09:50:44 compute-2 nova_compute[163961]: + . kolla_extend_start
Oct  9 09:50:44 compute-2 nova_compute[163961]: Running command: 'nova-compute'
Oct  9 09:50:44 compute-2 nova_compute[163961]: + echo 'Running command: '\''nova-compute'\'''
Oct  9 09:50:44 compute-2 nova_compute[163961]: + umask 0022
Oct  9 09:50:44 compute-2 nova_compute[163961]: + exec nova-compute
Oct  9 09:50:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:44 compute-2 python3.9[164124]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  9 09:50:45 compute-2 systemd[1]: Started libpod-conmon-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope.
Oct  9 09:50:45 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  9 09:50:45 compute-2 podman[164144]: 2025-10-09 09:50:45.054169982 +0000 UTC m=+0.076052176 container init 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:50:45 compute-2 podman[164144]: 2025-10-09 09:50:45.060247196 +0000 UTC m=+0.082129370 container start 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct  9 09:50:45 compute-2 python3.9[164124]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Applying nova statedir ownership
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  9 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Nova statedir ownership complete
Oct  9 09:50:45 compute-2 systemd[1]: libpod-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope: Deactivated successfully.
Oct  9 09:50:45 compute-2 podman[164182]: 2025-10-09 09:50:45.142979894 +0000 UTC m=+0.020396335 container died 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  9 09:50:45 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc-userdata-shm.mount: Deactivated successfully.
Oct  9 09:50:45 compute-2 systemd[1]: var-lib-containers-storage-overlay-196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10-merged.mount: Deactivated successfully.
Oct  9 09:50:45 compute-2 podman[164182]: 2025-10-09 09:50:45.17316611 +0000 UTC m=+0.050582530 container cleanup 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct  9 09:50:45 compute-2 systemd[1]: libpod-conmon-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope: Deactivated successfully.
Oct  9 09:50:45 compute-2 podman[164176]: 2025-10-09 09:50:45.209729324 +0000 UTC m=+0.081758031 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:50:45 compute-2 systemd[1]: session-37.scope: Deactivated successfully.
Oct  9 09:50:45 compute-2 systemd[1]: session-37.scope: Consumed 1min 57.435s CPU time.
Oct  9 09:50:45 compute-2 systemd-logind[800]: Session 37 logged out. Waiting for processes to exit.
Oct  9 09:50:45 compute-2 systemd-logind[800]: Removed session 37.
Oct  9 09:50:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.028 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.153 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.163 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.563 2 INFO nova.virt.driver [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.648 2 INFO nova.compute.provider_config [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 WARNING oslo_config.cfg [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  9 09:50:46 compute-2 nova_compute[163961]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  9 09:50:46 compute-2 nova_compute[163961]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  9 09:50:46 compute-2 nova_compute[163961]: and ``live_migration_inbound_addr`` respectively.
Oct  9 09:50:46 compute-2 nova_compute[163961]: ).  Its value may be silently ignored in the future.#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.785 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.795 2 INFO nova.virt.node [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.795 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.804 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f32e4a6a5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.806 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f32e4a6a5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.806 2 INFO nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.810 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host capabilities <capabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]: 
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <host>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <uuid>ed712924-75ec-452a-a842-ae61b9b9ed0c</uuid>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <arch>x86_64</arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <microcode version='167776725'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <signature family='25' model='1' stepping='1'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <maxphysaddr mode='emulate' bits='48'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='x2apic'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='tsc-deadline'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='osxsave'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='hypervisor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='tsc_adjust'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='ospke'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='vaes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='vpclmulqdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='spec-ctrl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='stibp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='arch-capabilities'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='cmp_legacy'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='virt-ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='lbrv'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='tsc-scale'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='vmcb-clean'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='pause-filter'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='pfthreshold'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='vgif'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='rdctl-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='mds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature name='pschange-mc-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <pages unit='KiB' size='4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <pages unit='KiB' size='2048'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <pages unit='KiB' size='1048576'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <power_management>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <suspend_mem/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </power_management>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <iommu support='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <migration_features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <live/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <uri_transports>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <uri_transport>tcp</uri_transport>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <uri_transport>rdma</uri_transport>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </uri_transports>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </migration_features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <topology>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <cells num='1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <cell id='0'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <memory unit='KiB'>7865152</memory>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <pages unit='KiB' size='4'>1966288</pages>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <pages unit='KiB' size='2048'>0</pages>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <distances>
Oct  9 09:50:46 compute-2 nova_compute[163961]:            <sibling id='0' value='10'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          </distances>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          <cpus num='4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:          </cpus>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        </cell>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </cells>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </topology>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <cache>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </cache>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <secmodel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model>selinux</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <doi>0</doi>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </secmodel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <secmodel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model>dac</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <doi>0</doi>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </secmodel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </host>
Oct  9 09:50:46 compute-2 nova_compute[163961]: 
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <guest>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <arch name='i686'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <wordsize>32</wordsize>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <pae/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <nonpae/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <cpuselection/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <deviceboot/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <externalSnapshot/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </guest>
Oct  9 09:50:46 compute-2 nova_compute[163961]: 
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <guest>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <os_type>hvm</os_type>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <arch name='x86_64'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <wordsize>64</wordsize>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <domain type='qemu'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <domain type='kvm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <acpi default='on' toggle='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <apic default='on' toggle='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <cpuselection/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <deviceboot/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <disksnapshot default='on' toggle='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <externalSnapshot/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </guest>
Oct  9 09:50:46 compute-2 nova_compute[163961]: 
Oct  9 09:50:46 compute-2 nova_compute[163961]: </capabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]: #033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.814 2 DEBUG nova.virt.libvirt.volume.mount [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.815 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.820 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  9 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <arch>i686</arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <vcpu max='4096'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <os supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <loader supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>rom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pflash</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='readonly'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>yes</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='secure'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </loader>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </os>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>file</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>anonymous</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>memfd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </memoryBacking>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <disk supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>disk</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cdrom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>floppy</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>lun</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>fdc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>sata</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vnc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>dbus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </graphics>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <video supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='modelType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vga</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cirrus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>none</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>bochs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ramfb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </video>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='mode'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>subsystem</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>mandatory</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>requisite</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>optional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pci</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hostdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <rng supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>random</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </rng>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='driverType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>path</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>handle</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </filesystem>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emulator</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>external</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>2.0</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </tpm>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </redirdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <channel supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pty</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>unix</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </channel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>qemu</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </crypto>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <interface supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>passt</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </interface>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <panic supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>isa</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>hyperv</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </panic>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <gic supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sev supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='features'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>relaxed</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vapic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vpindex</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>runtime</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>synic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>stimer</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reset</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>frequencies</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ipi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>avic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hyperv>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </features>
Oct  9 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.824 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  9 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <arch>i686</arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <vcpu max='240'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <os supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <loader supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>rom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pflash</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='readonly'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>yes</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='secure'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </loader>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </os>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>file</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>anonymous</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>memfd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </memoryBacking>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <disk supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>disk</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cdrom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>floppy</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>lun</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ide</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>fdc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>sata</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vnc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>dbus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </graphics>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <video supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='modelType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vga</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cirrus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>none</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>bochs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ramfb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </video>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='mode'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>subsystem</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>mandatory</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>requisite</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>optional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pci</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hostdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <rng supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>random</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </rng>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='driverType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>path</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>handle</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </filesystem>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emulator</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>external</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>2.0</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </tpm>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </redirdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <channel supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pty</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>unix</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </channel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>qemu</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </crypto>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <interface supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>passt</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </interface>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <panic supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>isa</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>hyperv</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </panic>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <gic supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sev supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='features'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>relaxed</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vapic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vpindex</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>runtime</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>synic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>stimer</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reset</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>frequencies</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ipi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>avic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hyperv>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </features>
Oct  9 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.826 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.828 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  9 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <arch>x86_64</arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <vcpu max='4096'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <os supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='firmware'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>efi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <loader supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>rom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pflash</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='readonly'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>yes</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='secure'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>yes</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </loader>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </os>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='athlon-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='core2duo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='coreduo-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='n270-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='phenom-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <memoryBacking supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='sourceType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>file</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>anonymous</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>memfd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </memoryBacking>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <disk supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='diskDevice'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>disk</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cdrom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>floppy</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>lun</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>fdc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>sata</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <graphics supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vnc</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egl-headless</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>dbus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </graphics>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <video supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='modelType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vga</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>cirrus</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>none</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>bochs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ramfb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </video>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hostdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='mode'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>subsystem</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='startupPolicy'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>mandatory</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>requisite</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>optional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='subsysType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pci</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='capsType'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='pciBackend'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hostdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <rng supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>random</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>egd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </rng>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <filesystem supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='driverType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>path</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>handle</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>virtiofs</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </filesystem>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <tpm supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-tis</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tpm-crb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emulator</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>external</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendVersion'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>2.0</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </tpm>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <redirdev supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </redirdev>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <channel supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pty</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>unix</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </channel>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <crypto supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>qemu</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </crypto>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <interface supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='backendType'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>passt</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </interface>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <panic supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>isa</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>hyperv</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </panic>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </devices>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <features>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <gic supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <genid supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <backup supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <async-teardown supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <ps2 supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sev supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <sgx supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <hyperv supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='features'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>relaxed</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vapic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>spinlocks</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vpindex</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>runtime</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>synic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>stimer</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reset</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>vendor_id</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>frequencies</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>reenlightenment</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>tlbflush</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>ipi</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>avic</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>emsr_bitmap</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>xmm_input</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </hyperv>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <launchSecurity supported='no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </features>
Oct  9 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.880 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  9 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <path>/usr/libexec/qemu-kvm</path>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <domain>kvm</domain>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <arch>x86_64</arch>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <vcpu max='240'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <iothreads supported='yes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <os supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <enum name='firmware'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <loader supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>rom</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>pflash</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='readonly'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>yes</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='secure'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>no</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </loader>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  </os>
Oct  9 09:50:46 compute-2 nova_compute[163961]:  <cpu>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-passthrough' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='hostPassthroughMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='maximum' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <enum name='maximumMigratable'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>on</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <value>off</value>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='host-model' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model fallback='forbid'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <vendor>AMD</vendor>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <maxphysaddr mode='passthrough' limit='48'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='x2apic'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-deadline'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='hypervisor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc_adjust'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vaes'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vpclmulqdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='spec-ctrl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='stibp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='arch-capabilities'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='cmp_legacy'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='overflow-recov'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='succor'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='virt-ssbd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lbrv'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='tsc-scale'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vmcb-clean'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='flushbyasid'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pause-filter'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pfthreshold'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='v-vmsave-vmload'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='vgif'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rdctl-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='mds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='pschange-mc-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='gds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <feature policy='require' name='rfds-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:46 compute-2 nova_compute[163961]:    <mode name='custom' supported='yes'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Broadwell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cascadelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Cooperlake-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Denverton-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Genoa-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='auto-ibrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='EPYC-Milan-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amd-psfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='no-nested-data-bp'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='null-sel-clr-base'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='stibp-always-on'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='GraniteRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-128'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-256'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx10-512'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='prefetchiti'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-IBRS'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Haswell-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-noTSX'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v3'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v6'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Icelake-Server-v7'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='KnightsMill-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4fmaps'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-4vnniw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512er'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512pf'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G4-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='Opteron_G5-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fma4'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tbm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xop'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v1'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  9 09:50:46 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v2'>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:46 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='SapphireRapids-v3'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='amx-bf16'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='amx-int8'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='amx-tile'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512-bf16'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512-fp16'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512-vpopcntdq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bitalg'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512ifma'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vbmi'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vbmi2'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vnni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fsrc'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fzrm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='la57'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='taa-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='tsx-ldtrk'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='xfd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='SierraForest'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='SierraForest-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-ifma'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-ne-convert'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-vnni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx-vnni-int8'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='bus-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cmpccxadd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fbsdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='fsrs'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ibrs-all'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='mcdt-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='pbrsb-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='psdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='sbdr-ssdp-no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='serialize'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-IBRS'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Client-v2'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-IBRS'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v2'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='hle'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='rtm'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v3'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v4'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Skylake-Server-v5'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512bw'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512cd'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512dq'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512f'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='avx512vl'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Snowridge'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='mpx'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v2'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v3'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='core-capability'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='split-lock-detect'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='Snowridge-v4'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='cldemote'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='gfni'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdir64b'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='movdiri'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='athlon'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='athlon-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='core2duo'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='core2duo-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='coreduo'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='coreduo-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='n270'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='n270-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='ss'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='phenom'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <blockers model='phenom-v1'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnow'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <feature name='3dnowext'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </blockers>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </mode>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  </cpu>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  <memoryBacking supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <enum name='sourceType'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <value>file</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <value>anonymous</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <value>memfd</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  </memoryBacking>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  <devices>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <disk supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='diskDevice'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>disk</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>cdrom</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>floppy</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>lun</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>ide</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>fdc</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>sata</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <graphics supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>vnc</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>egl-headless</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>dbus</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </graphics>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <video supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='modelType'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>vga</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>cirrus</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>none</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>bochs</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>ramfb</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </video>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <hostdev supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='mode'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>subsystem</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='startupPolicy'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>mandatory</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>requisite</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>optional</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='subsysType'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>pci</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>scsi</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='capsType'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='pciBackend'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </hostdev>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <rng supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio-transitional</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtio-non-transitional</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>random</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>egd</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </rng>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <filesystem supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='driverType'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>path</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>handle</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>virtiofs</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </filesystem>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <tpm supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>tpm-tis</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>tpm-crb</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>emulator</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>external</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='backendVersion'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>2.0</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </tpm>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <redirdev supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='bus'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>usb</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </redirdev>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <channel supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>pty</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>unix</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </channel>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <crypto supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='model'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='type'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>qemu</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='backendModel'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>builtin</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </crypto>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <interface supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='backendType'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>default</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>passt</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </interface>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <panic supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='model'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>isa</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>hyperv</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </panic>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  </devices>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  <features>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <gic supported='no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <vmcoreinfo supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <genid supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <backingStoreInput supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <backup supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <async-teardown supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <ps2 supported='yes'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <sev supported='no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <sgx supported='no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <hyperv supported='yes'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      <enum name='features'>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>relaxed</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>vapic</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>spinlocks</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>vpindex</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>runtime</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>synic</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>stimer</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>reset</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>vendor_id</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>frequencies</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>reenlightenment</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>tlbflush</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>ipi</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>avic</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>emsr_bitmap</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:        <value>xmm_input</value>
Oct  9 09:50:47 compute-2 nova_compute[163961]:      </enum>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    </hyperv>
Oct  9 09:50:47 compute-2 nova_compute[163961]:    <launchSecurity supported='no'/>
Oct  9 09:50:47 compute-2 nova_compute[163961]:  </features>
Oct  9 09:50:47 compute-2 nova_compute[163961]: </domainCapabilities>
Oct  9 09:50:47 compute-2 nova_compute[163961]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.920 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.921 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Secure Boot support detected#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.922 2 INFO nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.927 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.957 2 INFO nova.virt.node [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.968 2 WARNING nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute nodes ['41a86af9-054a-49c9-9d2e-f0396c1c31a8'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.986 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.995 2 WARNING nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.995 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/173428441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.345 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.542 2 WARNING nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5270MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.560 2 WARNING nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] No compute node record for compute-2.ctlplane.example.com:41a86af9-054a-49c9-9d2e-f0396c1c31a8: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 41a86af9-054a-49c9-9d2e-f0396c1c31a8 could not be found.#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.602 2 INFO nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 41a86af9-054a-49c9-9d2e-f0396c1c31a8#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.673 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.674 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:50:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:47.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.135 2 INFO nova.scheduler.client.report [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [req-152ce9ff-fe8e-46b9-be29-36d3362c5e96] Created resource provider record via placement API for resource provider with UUID 41a86af9-054a-49c9-9d2e-f0396c1c31a8 and name compute-2.ctlplane.example.com.#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.187 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:50:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:50:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2871433844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.523 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.527 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  9 09:50:48 compute-2 nova_compute[163961]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.527 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.528 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.528 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.598 2 DEBUG nova.scheduler.client.report [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updated inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.598 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.599 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.676 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG nova.service [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.736 2 DEBUG nova.service [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  9 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.736 2 DEBUG nova.servicegroup.drivers.db [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  9 09:50:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:48.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:51.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:52.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:53.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:54.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:55 compute-2 podman[164341]: 2025-10-09 09:50:55.225726305 +0000 UTC m=+0.060248469 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct  9 09:50:55 compute-2 systemd[1]: Stopping User Manager for UID 1000...
Oct  9 09:50:55 compute-2 systemd[1270]: Activating special unit Exit the Session...
Oct  9 09:50:55 compute-2 systemd[1270]: Removed slice User Background Tasks Slice.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped target Main User Target.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped target Basic System.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped target Paths.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped target Sockets.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped target Timers.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 09:50:55 compute-2 systemd[1270]: Closed D-Bus User Message Bus Socket.
Oct  9 09:50:55 compute-2 systemd[1270]: Stopped Create User's Volatile Files and Directories.
Oct  9 09:50:55 compute-2 systemd[1270]: Removed slice User Application Slice.
Oct  9 09:50:55 compute-2 systemd[1270]: Reached target Shutdown.
Oct  9 09:50:55 compute-2 systemd[1270]: Finished Exit the Session.
Oct  9 09:50:55 compute-2 systemd[1270]: Reached target Exit the Session.
Oct  9 09:50:55 compute-2 systemd[1]: user@1000.service: Deactivated successfully.
Oct  9 09:50:55 compute-2 systemd[1]: Stopped User Manager for UID 1000.
Oct  9 09:50:55 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct  9 09:50:55 compute-2 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct  9 09:50:55 compute-2 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct  9 09:50:55 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct  9 09:50:55 compute-2 systemd[1]: Removed slice User Slice of UID 1000.
Oct  9 09:50:55 compute-2 systemd[1]: user-1000.slice: Consumed 8min 13.727s CPU time.
Oct  9 09:50:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:55.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:57.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:50:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:50:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:50:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:50:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:50:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:50:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:50:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:59.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:04 compute-2 podman[164376]: 2025-10-09 09:51:04.20032549 +0000 UTC m=+0.035375406 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  9 09:51:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:06.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:08.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:51:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:51:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:51:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:10.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:51:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:51:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:10.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:14 compute-2 podman[164428]: 2025-10-09 09:51:14.210757825 +0000 UTC m=+0.037590483 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  9 09:51:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:16 compute-2 podman[164447]: 2025-10-09 09:51:16.206262111 +0000 UTC m=+0.037808544 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:51:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:51:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:26 compute-2 podman[164473]: 2025-10-09 09:51:26.234445603 +0000 UTC m=+0.064391037 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:28.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:51:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:51:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:51:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:51:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:51:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:51:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.232303) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491232329, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 761, "num_deletes": 250, "total_data_size": 1519760, "memory_usage": 1545464, "flush_reason": "Manual Compaction"}
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491235044, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 675072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17914, "largest_seqno": 18670, "table_properties": {"data_size": 671902, "index_size": 1014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8252, "raw_average_key_size": 20, "raw_value_size": 665286, "raw_average_value_size": 1614, "num_data_blocks": 44, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003438, "oldest_key_time": 1760003438, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2768 microseconds, and 1992 cpu microseconds.
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235070) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 675072 bytes OK
Oct  9 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235082) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct  9 09:52:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:04 compute-2 rsyslogd[1245]: imjournal: 312 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  9 09:52:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:06 compute-2 podman[164808]: 2025-10-09 09:52:06.208293181 +0000 UTC m=+0.038682432 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct  9 09:52:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:10.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:16.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:16 compute-2 podman[164860]: 2025-10-09 09:52:16.202311678 +0000 UTC m=+0.035555095 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  9 09:52:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:18 compute-2 podman[164878]: 2025-10-09 09:52:18.201285802 +0000 UTC m=+0.037124426 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  9 09:52:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:22.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:52:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:22.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:52:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:24.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:26.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:28 compute-2 podman[164905]: 2025-10-09 09:52:28.21954069 +0000 UTC m=+0.052486363 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 09:52:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:28.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:30.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:52:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:52:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:32.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:37 compute-2 podman[164963]: 2025-10-09 09:52:37.210596 +0000 UTC m=+0.045902968 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:52:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:38.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:39.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:40.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:41.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:42.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:45.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:47 compute-2 podman[164990]: 2025-10-09 09:52:47.20350789 +0000 UTC m=+0.039013433 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.453 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.454 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.467 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.468 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.468 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.475 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.477 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/216110203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.826 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.011 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.011 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5376MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.012 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.012 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.057 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.057 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.071 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:52:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:48.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:52:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3056215677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.412 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.415 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.427 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.429 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.429 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:52:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:49 compute-2 nova_compute[163961]: 2025-10-09 09:52:49.124 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:49 compute-2 nova_compute[163961]: 2025-10-09 09:52:49.124 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:52:49 compute-2 podman[165131]: 2025-10-09 09:52:49.204029598 +0000 UTC m=+0.038895831 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  9 09:52:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:50.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:52.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:52:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:52:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:53.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:52:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:54.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:52:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:55.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:52:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:52:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:57.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:58.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:52:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:52:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:59.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:52:59 compute-2 podman[165208]: 2025-10-09 09:52:59.218875526 +0000 UTC m=+0.051291228 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:52:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:52:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:52:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:01.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.047 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.048 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.049 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:53:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:04.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:06.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:08.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:08 compute-2 podman[165243]: 2025-10-09 09:53:08.209488618 +0000 UTC m=+0.040753209 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:53:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.271 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.271 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.272 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:53:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:53:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:53:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:53:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:13.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:16.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.262049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596262073, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1318, "num_deletes": 256, "total_data_size": 3199294, "memory_usage": 3247336, "flush_reason": "Manual Compaction"}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596267520, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2067926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18676, "largest_seqno": 19988, "table_properties": {"data_size": 2062313, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11873, "raw_average_key_size": 18, "raw_value_size": 2050859, "raw_average_value_size": 3270, "num_data_blocks": 132, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003491, "oldest_key_time": 1760003491, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5495 microseconds, and 3668 cpu microseconds.
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267545) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2067926 bytes OK
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267555) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268151) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268162) EVENT_LOG_v1 {"time_micros": 1760003596268159, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3193004, prev total WAL file size 3193004, number of live WAL files 2.
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2019KB)], [33(11MB)]
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596268680, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14226715, "oldest_snapshot_seqno": -1}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5015 keys, 13754866 bytes, temperature: kUnknown
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596306053, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13754866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13719680, "index_size": 21572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126900, "raw_average_key_size": 25, "raw_value_size": 13626768, "raw_average_value_size": 2717, "num_data_blocks": 890, "num_entries": 5015, "num_filter_entries": 5015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306325) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13754866 bytes
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 378.9 rd, 366.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5541, records dropped: 526 output_compression: NoCompression
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306814) EVENT_LOG_v1 {"time_micros": 1760003596306808, "job": 18, "event": "compaction_finished", "compaction_time_micros": 37546, "compaction_time_cpu_micros": 19238, "output_level": 6, "num_output_files": 1, "total_output_size": 13754866, "num_input_records": 5541, "num_output_records": 5015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596307535, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596309248, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:53:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:18 compute-2 podman[165296]: 2025-10-09 09:53:18.205361016 +0000 UTC m=+0.038153522 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct  9 09:53:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:19.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:20 compute-2 podman[165314]: 2025-10-09 09:53:20.203986044 +0000 UTC m=+0.036665442 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:21.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:25.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:27.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:28.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:29.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:30.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:30 compute-2 podman[165341]: 2025-10-09 09:53:30.219365123 +0000 UTC m=+0.055439120 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:34.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:53:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:53:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:36.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:39 compute-2 podman[165399]: 2025-10-09 09:53:39.209896951 +0000 UTC m=+0.045572106 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  9 09:53:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:53:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:53:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:53:46 compute-2 nova_compute[163961]: 2025-10-09 09:53:46.175 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:47 compute-2 nova_compute[163961]: 2025-10-09 09:53:47.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.170 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.192 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:49 compute-2 podman[165426]: 2025-10-09 09:53:49.223091843 +0000 UTC m=+0.046036042 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  9 09:53:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.548 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.727 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.728 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5376MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.729 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.729 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.774 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.775 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.790 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:53:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:53:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1008308126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.127 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.130 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.148 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.149 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.150 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:53:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:51 compute-2 nova_compute[163961]: 2025-10-09 09:53:51.150 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:53:51 compute-2 podman[165512]: 2025-10-09 09:53:51.177490037 +0000 UTC m=+0.039414257 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 09:53:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:52.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:53:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:56.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:53:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:53:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:53:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:53:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:53:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:53:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:53:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:53:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:01 compute-2 podman[165643]: 2025-10-09 09:54:01.218424397 +0000 UTC m=+0.052992125 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 09:54:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:07.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:09.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.714162) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649714239, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 251, "total_data_size": 1569754, "memory_usage": 1595368, "flush_reason": "Manual Compaction"}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649719000, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1032670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19993, "largest_seqno": 20780, "table_properties": {"data_size": 1028915, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8578, "raw_average_key_size": 19, "raw_value_size": 1021376, "raw_average_value_size": 2316, "num_data_blocks": 68, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003597, "oldest_key_time": 1760003597, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4882 microseconds, and 3829 cpu microseconds.
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719047) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1032670 bytes OK
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719068) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719467) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719480) EVENT_LOG_v1 {"time_micros": 1760003649719477, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1565615, prev total WAL file size 1565615, number of live WAL files 2.
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1008KB)], [36(13MB)]
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649719963, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14787536, "oldest_snapshot_seqno": -1}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4940 keys, 12621288 bytes, temperature: kUnknown
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649752994, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12621288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12587595, "index_size": 20271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125966, "raw_average_key_size": 25, "raw_value_size": 12496861, "raw_average_value_size": 2529, "num_data_blocks": 833, "num_entries": 4940, "num_filter_entries": 4940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753208) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12621288 bytes
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 446.6 rd, 381.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(26.5) write-amplify(12.2) OK, records in: 5456, records dropped: 516 output_compression: NoCompression
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753767) EVENT_LOG_v1 {"time_micros": 1760003649753760, "job": 20, "event": "compaction_finished", "compaction_time_micros": 33115, "compaction_time_cpu_micros": 17611, "output_level": 6, "num_output_files": 1, "total_output_size": 12621288, "num_input_records": 5456, "num_output_records": 4940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649754111, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649756050, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:54:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:10 compute-2 podman[165674]: 2025-10-09 09:54:10.208285222 +0000 UTC m=+0.042716052 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:11.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:13.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:15.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:17.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:19.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:20 compute-2 podman[165726]: 2025-10-09 09:54:20.202327297 +0000 UTC m=+0.038263888 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 09:54:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:21.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:22 compute-2 podman[165744]: 2025-10-09 09:54:22.210360778 +0000 UTC m=+0.041946932 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:23.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=404 latency=0.001000010s ======
Oct  9 09:54:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:28.653 +0000] "GET /healthcheck HTTP/1.1" 404 242 - "python-urllib3/1.26.5" - latency=0.001000010s
Oct  9 09:54:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:29.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:29 compute-2 systemd[1]: packagekit.service: Deactivated successfully.
Oct  9 09:54:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:30.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:31 compute-2 podman[165795]: 2025-10-09 09:54:31.304086657 +0000 UTC m=+0.051041989 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  9 09:54:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:32.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:32 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct  9 09:54:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:34.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct  9 09:54:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:35.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct  9 09:54:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:36.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:40.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:41.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:41 compute-2 podman[165829]: 2025-10-09 09:54:41.207925816 +0000 UTC m=+0.042561641 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  9 09:54:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct  9 09:54:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:54:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:42.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 09:54:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 09:54:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:54:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:49.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.197 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3185983185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.542 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.742 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.743 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5372MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.743 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.744 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.804 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.805 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.826 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:54:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4109511036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:54:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1194021554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.163 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.167 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.187 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:54:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.185 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-2 podman[165900]: 2025-10-09 09:54:51.196463239 +0000 UTC m=+0.033071176 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:54:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:52.051 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:54:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:52.052 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:54:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:54:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:52.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:53 compute-2 podman[165943]: 2025-10-09 09:54:53.214358297 +0000 UTC m=+0.041172840 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 09:54:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:54.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:54:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:54:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:54:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:54:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:54:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:59 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:59.054 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:54:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:54:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:54:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:54:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:54:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:54:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:01.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:55:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:55:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:55:02 compute-2 podman[166074]: 2025-10-09 09:55:02.272042746 +0000 UTC m=+0.100173536 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.274 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.274 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.275 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:55:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:11 compute-2 podman[166130]: 2025-10-09 09:55:11.484547862 +0000 UTC m=+0.067425970 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 09:55:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:12.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:55:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3970 writes, 21K keys, 3970 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s#012Cumulative WAL: 3970 writes, 3970 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1495 writes, 7150 keys, 1495 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s#012Interval WAL: 1495 writes, 1495 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    380.3      0.09              0.06        10    0.009       0      0       0.0       0.0#012  L6      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    387.6    328.9      0.35              0.18         9    0.039     42K   4799       0.0       0.0#012 Sum      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    311.3    339.0      0.43              0.24        19    0.023     42K   4799       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    292.8    297.6      0.21              0.10         8    0.026     22K   2557       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    387.6    328.9      0.35              0.18         9    0.039     42K   4799       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    384.3      0.08              0.06         9    0.009       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.4 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 8.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(477,7.69 MB,2.53081%) FilterBlock(19,127.80 KB,0.0410532%) IndexBlock(19,240.41 KB,0.0772275%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 09:55:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct  9 09:55:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:22 compute-2 podman[166160]: 2025-10-09 09:55:22.206321594 +0000 UTC m=+0.038777536 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  9 09:55:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct  9 09:55:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct  9 09:55:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:24 compute-2 podman[166179]: 2025-10-09 09:55:24.210909241 +0000 UTC m=+0.038466782 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  9 09:55:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:29.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:31 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:33 compute-2 podman[166230]: 2025-10-09 09:55:33.222273934 +0000 UTC m=+0.054393566 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:55:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:34.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:42 compute-2 podman[166262]: 2025-10-09 09:55:42.206292527 +0000 UTC m=+0.037925311 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:55:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.182 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  9 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.187 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.210 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.210 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:55:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1559129444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.546 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.736 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5372MB free_disk=59.94271469116211GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.841 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.841 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  9 09:55:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.890 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.915 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.915 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.959 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.975 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  9 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.985 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:55:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2594520052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:55:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/576738267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.322 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.325 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.338 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.339 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.339 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:55:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:53 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:53.198 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 09:55:53 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:53.199 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 09:55:53 compute-2 podman[166360]: 2025-10-09 09:55:53.232515268 +0000 UTC m=+0.063011874 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 09:55:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:55:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:55:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:55 compute-2 podman[166377]: 2025-10-09 09:55:55.201284378 +0000 UTC m=+0.037359268 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:55:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:55:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:55:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:55:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:55:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:55:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:55:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:00.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:56:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:02.200 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:56:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:04 compute-2 podman[166552]: 2025-10-09 09:56:04.218751082 +0000 UTC m=+0.054704016 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 09:56:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:04.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:06.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:56:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 09:56:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 6965 writes, 28K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6965 writes, 1430 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 985 writes, 2603 keys, 985 commit groups, 1.0 writes per commit group, ingest: 2.82 MB, 0.00 MB/s#012Interval WAL: 985 writes, 447 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:08.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.275 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:56:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:56:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:56:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:12.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:13 compute-2 podman[166635]: 2025-10-09 09:56:13.207328373 +0000 UTC m=+0.042078983 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 09:56:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct  9 09:56:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:24 compute-2 podman[166664]: 2025-10-09 09:56:24.20132848 +0000 UTC m=+0.034384081 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 09:56:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:26 compute-2 podman[166682]: 2025-10-09 09:56:26.21072156 +0000 UTC m=+0.047233848 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct  9 09:56:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:27.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:28.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:35 compute-2 podman[166733]: 2025-10-09 09:56:35.219030676 +0000 UTC m=+0.051273301 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 09:56:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:44 compute-2 podman[166766]: 2025-10-09 09:56:44.206362109 +0000 UTC m=+0.038866719 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:56:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:46.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:49 compute-2 nova_compute[163961]: 2025-10-09 09:56:49.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:49 compute-2 nova_compute[163961]: 2025-10-09 09:56:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3254798988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:50 compute-2 nova_compute[163961]: 2025-10-09 09:56:50.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:51 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1845698345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.533 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.736 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.737 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5354MB free_disk=59.967525482177734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.738 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.778 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.779 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.790 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:56:52 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1583333975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.137 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.140 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.150 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.151 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.151 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:56:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:52.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:54 compute-2 nova_compute[163961]: 2025-10-09 09:56:54.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:56:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:55 compute-2 podman[166863]: 2025-10-09 09:56:55.203512402 +0000 UTC m=+0.035988053 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  9 09:56:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:56:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:56:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:56.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:57 compute-2 podman[166881]: 2025-10-09 09:56:57.209342047 +0000 UTC m=+0.041808894 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  9 09:56:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:56:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:56:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:58.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:56:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:56:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:56:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:56:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:56:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:00.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:57:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:05 compute-2 podman[166931]: 2025-10-09 09:57:05.817586562 +0000 UTC m=+0.062204036 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:06 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:06.085 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:57:06 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:06.086 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:57:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:57:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.087 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:57:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:11.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:57:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:57:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:57:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:57:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:12.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:15 compute-2 podman[167068]: 2025-10-09 09:57:15.20738132 +0000 UTC m=+0.040826750 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  9 09:57:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:23.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:26 compute-2 podman[167097]: 2025-10-09 09:57:26.200396438 +0000 UTC m=+0.036814699 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 09:57:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:28 compute-2 podman[167114]: 2025-10-09 09:57:28.228419471 +0000 UTC m=+0.064420373 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct  9 09:57:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:57:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:57:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.357225) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855357244, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2398, "num_deletes": 251, "total_data_size": 6398910, "memory_usage": 6495200, "flush_reason": "Manual Compaction"}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855367040, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4153932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20785, "largest_seqno": 23178, "table_properties": {"data_size": 4144126, "index_size": 6236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 4124479, "raw_average_value_size": 4187, "num_data_blocks": 272, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003650, "oldest_key_time": 1760003650, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9841 microseconds, and 7092 cpu microseconds.
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367065) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4153932 bytes OK
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367077) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367555) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367566) EVENT_LOG_v1 {"time_micros": 1760003855367563, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6388365, prev total WAL file size 6388365, number of live WAL files 2.
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.368405) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4056KB)], [39(12MB)]
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855368431, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16775220, "oldest_snapshot_seqno": -1}
Oct  9 09:57:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5401 keys, 14600848 bytes, temperature: kUnknown
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855415357, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14600848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14562498, "index_size": 23776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136188, "raw_average_key_size": 25, "raw_value_size": 14462053, "raw_average_value_size": 2677, "num_data_blocks": 981, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.415518) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14600848 bytes
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 357.1 rd, 310.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5925, records dropped: 524 output_compression: NoCompression
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416036) EVENT_LOG_v1 {"time_micros": 1760003855416029, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46972, "compaction_time_cpu_micros": 21253, "output_level": 6, "num_output_files": 1, "total_output_size": 14600848, "num_input_records": 5925, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855416595, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855418110, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.368375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:36 compute-2 podman[167164]: 2025-10-09 09:57:36.219276584 +0000 UTC m=+0.050786760 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct  9 09:57:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:57:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:57:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:46 compute-2 podman[167198]: 2025-10-09 09:57:46.229373218 +0000 UTC m=+0.065468268 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct  9 09:57:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:47.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:50 compute-2 nova_compute[163961]: 2025-10-09 09:57:50.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3090346491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:51 compute-2 nova_compute[163961]: 2025-10-09 09:57:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:57:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:52.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:57:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/872108722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.533 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.723 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.724 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5346MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.724 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.725 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.769 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.770 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.782 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:57:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.117 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.120 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.129 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.130 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.130 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:57:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:54.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:57:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:57:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:57:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:57 compute-2 podman[167295]: 2025-10-09 09:57:57.197057461 +0000 UTC m=+0.033303983 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 09:57:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:57.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:57:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:57:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:59 compute-2 podman[167313]: 2025-10-09 09:57:59.202194514 +0000 UTC m=+0.034893288 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 09:57:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:57:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:57:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:59.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:57:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:57:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:57:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:00.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:02.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:04.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:06.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:07 compute-2 podman[167340]: 2025-10-09 09:58:07.233365079 +0000 UTC m=+0.064393752 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  9 09:58:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:08.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:08 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:08.859 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:58:08 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:08.860 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:58:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:58:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:13.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:15 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:15.861 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:58:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:17 compute-2 podman[167502]: 2025-10-09 09:58:17.21139485 +0000 UTC m=+0.041938204 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:58:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:58:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:58:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:25.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:27.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:28 compute-2 podman[167532]: 2025-10-09 09:58:28.207546146 +0000 UTC m=+0.035966597 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:58:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:28.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:30 compute-2 podman[167550]: 2025-10-09 09:58:30.20041986 +0000 UTC m=+0.036907791 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  9 09:58:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:33.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:38 compute-2 podman[167600]: 2025-10-09 09:58:38.223743301 +0000 UTC m=+0.059226552 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:58:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:44.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:46.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:48 compute-2 podman[167633]: 2025-10-09 09:58:48.209407844 +0000 UTC m=+0.045203980 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  9 09:58:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:48.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:50.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:51 compute-2 nova_compute[163961]: 2025-10-09 09:58:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:52 compute-2 nova_compute[163961]: 2025-10-09 09:58:52.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:52 compute-2 nova_compute[163961]: 2025-10-09 09:58:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:58:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3219886149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.527 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.715 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.716 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5340MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.716 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.717 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.756 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  9 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.769 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:58:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:58:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/912181250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.116 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.120 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.130 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.131 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.131 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:58:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.131 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.132 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.132 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:57 compute-2 nova_compute[163961]: 2025-10-09 09:58:57.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 09:58:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:58:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:58:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:58.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:59 compute-2 podman[167730]: 2025-10-09 09:58:59.202460159 +0000 UTC m=+0.034422937 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  9 09:58:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:58:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:58:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:58:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:58:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:58:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:00.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:00 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:01 compute-2 podman[167748]: 2025-10-09 09:59:01.226920302 +0000 UTC m=+0.048780361 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  9 09:59:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:01 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.314 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.314 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.324 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.374 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.375 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.415 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.416 2 INFO nova.compute.claims [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Claim successful on node compute-2.ctlplane.example.com
Oct  9 09:59:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.485 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.836 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.839 2 DEBUG nova.compute.provider_tree [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.854 2 DEBUG nova.scheduler.client.report [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.870 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.871 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  9 09:59:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:02 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.921 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.921 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.937 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  9 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.952 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.064 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.065 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.065 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating image(s)#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.085 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.105 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.124 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.126 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.127 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.417 2 DEBUG nova.virt.libvirt.imagebackend [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image locations are: [{'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  9 09:59:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:03 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.014 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.059 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.060 2 DEBUG nova.virt.images [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] 9546778e-959c-466e-9bef-81ace5bd1cc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.061 2 DEBUG nova.privsep.utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.061 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.092 2 WARNING oslo_policy.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.093 2 WARNING oslo_policy.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.095 2 DEBUG nova.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.118 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.122 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.166 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.167 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.184 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.186 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7c7e2ca-e694-465f-941e-15513c7e91ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.358 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7c7e2ca-e694-465f-941e-15513c7e91ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.402 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  9 09:59:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.457 2 DEBUG nova.objects.instance [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.472 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.472 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Ensure instance console log exists: /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:04 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:05 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:06 compute-2 nova_compute[163961]: 2025-10-09 09:59:06.292 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully created port: 55484b13-541c-4895-beab-bdcdaa30f4fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:59:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:06 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.804 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully updated port: 55484b13-541c-4895-beab-bdcdaa30f4fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.816 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.817 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.817 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.879 2 DEBUG nova.compute.manager [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.880 2 DEBUG nova.compute.manager [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.880 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:07 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.925 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.470 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.483 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance network_info: |[{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.486 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start _get_guest_xml network_info=[{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.489 2 WARNING nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.492 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.492 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.498 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.498 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.503 2 DEBUG nova.privsep.utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.504 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:59:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2385607545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.845 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.862 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.865 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:08 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.174 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.175 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.189 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct  9 09:59:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2240393634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.213 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.214 2 DEBUG nova.virt.libvirt.vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:02Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.215 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.216 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.217 2 DEBUG nova.objects.instance [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:09 compute-2 podman[168033]: 2025-10-09 09:59:09.228110012 +0000 UTC m=+0.063713670 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.229 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] End _get_guest_xml xml=<domain type="kvm">
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <name>instance-00000006</name>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <memory>131072</memory>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <vcpu>1</vcpu>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <metadata>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:creationTime>2025-10-09 09:59:08</nova:creationTime>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:flavor name="m1.nano">
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:memory>128</nova:memory>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:disk>1</nova:disk>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:swap>0</nova:swap>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:vcpus>1</nova:vcpus>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </nova:flavor>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:owner>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </nova:owner>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <nova:ports>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct  9 09:59:09 compute-2 nova_compute[163961]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        </nova:port>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </nova:ports>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </nova:instance>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </metadata>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <sysinfo type="smbios">
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <system>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="manufacturer">RDO</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="product">OpenStack Compute</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="serial">c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="uuid">c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <entry name="family">Virtual Machine</entry>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </system>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </sysinfo>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <os>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <boot dev="hd"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <smbios mode="sysinfo"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </os>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <features>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <acpi/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <apic/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <vmcoreinfo/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </features>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <clock offset="utc">
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <timer name="pit" tickpolicy="delay"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <timer name="hpet" present="no"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </clock>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <cpu mode="host-model" match="exact">
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <topology sockets="1" cores="1" threads="1"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </cpu>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  <devices>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <disk type="network" device="disk">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <driver type="raw" cache="none"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <source protocol="rbd" name="vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk">
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </source>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <auth username="openstack">
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </auth>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <target dev="vda" bus="virtio"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <disk type="network" device="cdrom">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <driver type="raw" cache="none"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <source protocol="rbd" name="vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config">
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.100" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.102" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <host name="192.168.122.101" port="6789"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </source>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <auth username="openstack">
Oct  9 09:59:09 compute-2 nova_compute[163961]:        <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      </auth>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <target dev="sda" bus="sata"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </disk>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <interface type="ethernet">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <mac address="fa:16:3e:d9:1b:2f"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <model type="virtio"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <mtu size="1442"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <target dev="tap55484b13-54"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </interface>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <serial type="pty">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <log file="/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log" append="off"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </serial>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <video>
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <model type="virtio"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </video>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <input type="tablet" bus="usb"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <rng model="virtio">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <backend model="random">/dev/urandom</backend>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </rng>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="pci" model="pcie-root-port"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <controller type="usb" index="0"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    <memballoon model="virtio">
Oct  9 09:59:09 compute-2 nova_compute[163961]:      <stats period="10"/>
Oct  9 09:59:09 compute-2 nova_compute[163961]:    </memballoon>
Oct  9 09:59:09 compute-2 nova_compute[163961]:  </devices>
Oct  9 09:59:09 compute-2 nova_compute[163961]: </domain>
Oct  9 09:59:09 compute-2 nova_compute[163961]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Preparing to wait for external event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG nova.virt.libvirt.vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:02Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.232 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.232 2 DEBUG os_vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.259 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.259 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.272 2 INFO oslo.privsep.daemon [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpy23ocg_f/privsep.sock']#033[00m
Oct  9 09:59:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.820 2 INFO oslo.privsep.daemon [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.734 698 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.738 698 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.739 698 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  9 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.740 698 INFO oslo.privsep.daemon [-] privsep daemon running as pid 698#033[00m
Oct  9 09:59:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:09 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55484b13-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55484b13-54, col_values=(('external_ids', {'iface-id': '55484b13-541c-4895-beab-bdcdaa30f4fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:1b:2f', 'vm-uuid': 'c7c7e2ca-e694-465f-941e-15513c7e91ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.0772] manager: (tap55484b13-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.082 2 INFO os_vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54')#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:1b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.117 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Using config drive#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.133 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.318 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating config drive at /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.322 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63p3tami execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.443 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63p3tami" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.465 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.468 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.551 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.552 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deleting local config drive /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config because it was imported into RBD.#033[00m
Oct  9 09:59:10 compute-2 systemd[1]: Starting libvirt secret daemon...
Oct  9 09:59:10 compute-2 systemd[1]: Started libvirt secret daemon.
Oct  9 09:59:10 compute-2 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  9 09:59:10 compute-2 kernel: tap55484b13-54: entered promiscuous mode
Oct  9 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6328] manager: (tap55484b13-54): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct  9 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00027|binding|INFO|Claiming lport 55484b13-541c-4895-beab-bdcdaa30f4fe for this chassis.
Oct  9 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00028|binding|INFO|55484b13-541c-4895-beab-bdcdaa30f4fe: Claiming fa:16:3e:d9:1b:2f 10.100.0.6
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.645 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1b:2f 10.100.0.6'], port_security=['fa:16:3e:d9:1b:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72489230-c514-4cf9-bf1c-35e063204738', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=55484b13-541c-4895-beab-bdcdaa30f4fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.647 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 55484b13-541c-4895-beab-bdcdaa30f4fe in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 bound to our chassis#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.648 71793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab21f371-26e2-4c4f-bba0-3c44fb308723#033[00m
Oct  9 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.649 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_7t9jm42/privsep.sock']#033[00m
Oct  9 09:59:10 compute-2 systemd-udevd[168161]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6757] device (tap55484b13-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6765] device (tap55484b13-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:59:10 compute-2 systemd-machined[121527]: New machine qemu-1-instance-00000006.
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Oct  9 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00029|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe ovn-installed in OVS
Oct  9 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00030|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe up in Southbound
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:10 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.992 2 DEBUG nova.compute.manager [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.992 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG nova.compute.manager [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Processing event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.125 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.199 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.199 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_7t9jm42/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.113 168221 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.116 168221 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.119 168221 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.119 168221 INFO oslo.privsep.daemon [-] privsep daemon running as pid 168221#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.202 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[a67c1c37-b7e5-4cd6-80fd-712a2b9480b6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.442 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.442 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4415255, c7c7e2ca-e694-465f-941e-15513c7e91ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.443 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Started (Lifecycle Event)#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.445 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.452 2 INFO nova.virt.libvirt.driver [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance spawned successfully.#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.452 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.461 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.462 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.468 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.468 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.475 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.476 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4447737, c7c7e2ca-e694-465f-941e-15513c7e91ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.476 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Paused (Lifecycle Event)#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.501 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.503 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4453082, c7c7e2ca-e694-465f-941e-15513c7e91ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.503 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Resumed (Lifecycle Event)#033[00m
Oct  9 09:59:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.523 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.525 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.528 2 INFO nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 8.46 seconds to spawn the instance on the hypervisor.#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.529 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.547 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.577 2 INFO nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 9.23 seconds to build instance.#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.587 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.713 168221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.714 168221 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.714 168221 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 09:59:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 09:59:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 09:59:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 09:59:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:11 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.307 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa69bee-bb96-41d9-8ad7-fb2ae03345ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.308 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab21f371-21 in ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.310 168221 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab21f371-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.310 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[7a590b71-d8b8-4040-bb56-1ae69c852888]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.314 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eda36a-a2a1-457a-b8b7-11f1dac546a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.332 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[177f40f9-ac91-41fe-bbd9-ac1915933ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.346 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[699e104a-764f-49f5-b4bd-859a4c6102ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.349 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpelyc0ali/privsep.sock']#033[00m
Oct  9 09:59:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:12.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:12 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.955 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.956 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpelyc0ali/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.864 168262 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.867 168262 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.869 168262 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.869 168262 INFO oslo.privsep.daemon [-] privsep daemon running as pid 168262#033[00m
Oct  9 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.959 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[770165c7-d565-4eb6-9191-42f5e90ed1d8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.072 2 DEBUG nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.072 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.074 2 WARNING nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state active and task_state None.#033[00m
Oct  9 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:13 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.965 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[4bac90c4-67d8-4889-93e4-8fce1d92d142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:13 compute-2 NetworkManager[984]: <info>  [1760003953.9780] manager: (tapab21f371-20): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct  9 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.977 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[0afc0e28-461e-47c7-8f12-bc75926b94d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:13 compute-2 systemd-udevd[168273]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.012 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[c1062481-c188-437f-ad22-2fc7fdc3a860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.015 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[98658a6f-da66-46bd-a7dd-c090b6f29ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 NetworkManager[984]: <info>  [1760003954.0310] device (tapab21f371-20): carrier: link connected
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.034 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[9084ce1e-5065-4285-8968-753cc8e08eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.055 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[4de7eaf4-9a61-4865-8f1e-2e6a24a45d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab21f371-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:89:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163846, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 168284, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.064 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[167168ba-ff62-4fc6-9bd4-624d262d8cb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:895b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 163846, 'tstamp': 163846}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 168286, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.075 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[697fa369-ad2d-43bf-9221-4614a5bda2b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab21f371-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:89:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163846, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 168287, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.099 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[e1de30d1-a174-4374-bcf4-b28fdd37fe3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.143 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[e7237359-5a77-46c3-8ed2-ebc8e16acb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.145 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21f371-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.146 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.146 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab21f371-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:14 compute-2 NetworkManager[984]: <info>  [1760003954.1497] manager: (tapab21f371-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  9 09:59:14 compute-2 kernel: tapab21f371-20: entered promiscuous mode
Oct  9 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.157 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab21f371-20, col_values=(('external_ids', {'iface-id': '188102c6-f5ba-4733-92be-2659db7ae55a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:14 compute-2 ovn_controller[62794]: 2025-10-09T09:59:14Z|00031|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.162 71793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.163 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[045c9300-7082-436d-b82f-4abc4764b664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.164 71793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: global
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    log         /dev/log local0 debug
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    log-tag     haproxy-metadata-proxy-ab21f371-26e2-4c4f-bba0-3c44fb308723
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    user        root
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    group       root
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    maxconn     1024
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    pidfile     /var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    daemon
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: defaults
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    log global
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    mode http
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    option httplog
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    option dontlognull
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    option http-server-close
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    option forwardfor
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    retries                 3
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    timeout http-request    30s
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    timeout connect         30s
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    timeout client          32s
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    timeout server          32s
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    timeout http-keep-alive 30s
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: listen listener
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    bind 169.254.169.254:80
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]:    http-request add-header X-OVN-Network-ID ab21f371-26e2-4c4f-bba0-3c44fb308723
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.165 71793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'env', 'PROCESS_TAG=haproxy-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab21f371-26e2-4c4f-bba0-3c44fb308723.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.470290658 +0000 UTC m=+0.037607971 container create 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  9 09:59:14 compute-2 systemd[1]: Started libpod-conmon-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope.
Oct  9 09:59:14 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:59:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b78e34eb4bd48f615c51f92b1c60c1faa9ea89e7ed53520625d65534f9f4de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.524124013 +0000 UTC m=+0.091441336 container init 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  9 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.528622192 +0000 UTC m=+0.095939505 container start 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.450676386 +0000 UTC m=+0.017993719 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:59:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:14 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : New worker (168335) forked
Oct  9 09:59:14 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : Loading success.
Oct  9 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.567 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 09:59:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:14 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:15 compute-2 ovn_controller[62794]: 2025-10-09T09:59:15Z|00032|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0643] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0645] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0653] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0655] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0660] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0662] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0664] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0666] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:15 compute-2 ovn_controller[62794]: 2025-10-09T09:59:15Z|00033|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.252 2 DEBUG nova.compute.manager [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.253 2 DEBUG nova.compute.manager [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.253 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.254 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.254 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:59:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:15.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:15 compute-2 podman[168446]: 2025-10-09 09:59:15.651500603 +0000 UTC m=+0.039444863 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 09:59:15 compute-2 podman[168446]: 2025-10-09 09:59:15.736136741 +0000 UTC m=+0.124080991 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct  9 09:59:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:15 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:15 compute-2 podman[168526]: 2025-10-09 09:59:15.994984273 +0000 UTC m=+0.035101717 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:59:16 compute-2 podman[168526]: 2025-10-09 09:59:16.000520428 +0000 UTC m=+0.040637893 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 09:59:16 compute-2 podman[168628]: 2025-10-09 09:59:16.349270345 +0000 UTC m=+0.045667504 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:59:16 compute-2 podman[168628]: 2025-10-09 09:59:16.360056046 +0000 UTC m=+0.056453185 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 09:59:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:16.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:16 compute-2 podman[168680]: 2025-10-09 09:59:16.514009975 +0000 UTC m=+0.045222714 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct  9 09:59:16 compute-2 podman[168680]: 2025-10-09 09:59:16.520911905 +0000 UTC m=+0.052124644 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, io.buildah.version=1.28.2, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct  9 09:59:16 compute-2 podman[168722]: 2025-10-09 09:59:16.648876947 +0000 UTC m=+0.046081393 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 09:59:16 compute-2 podman[168722]: 2025-10-09 09:59:16.663103513 +0000 UTC m=+0.060307949 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct  9 09:59:16 compute-2 nova_compute[163961]: 2025-10-09 09:59:16.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:16 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.173 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.174 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.188 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:17.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 09:59:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:17 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:18 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:19 compute-2 podman[168860]: 2025-10-09 09:59:19.220362893 +0000 UTC m=+0.048738212 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct  9 09:59:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:19 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:20 compute-2 nova_compute[163961]: 2025-10-09 09:59:20.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:20 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:21 compute-2 nova_compute[163961]: 2025-10-09 09:59:21.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:21 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 09:59:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:22 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:23 compute-2 ovn_controller[62794]: 2025-10-09T09:59:23Z|00004|pinctrl(ovn_pinctrl1)|INFO|DHCPOFFER fa:16:3e:d9:1b:2f 10.100.0.6
Oct  9 09:59:23 compute-2 ovn_controller[62794]: 2025-10-09T09:59:23Z|00005|pinctrl(ovn_pinctrl1)|INFO|DHCPACK fa:16:3e:d9:1b:2f 10.100.0.6
Oct  9 09:59:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:23 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:23.571 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:23 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:24 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:25 compute-2 nova_compute[163961]: 2025-10-09 09:59:25.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:25 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:26.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:26 compute-2 nova_compute[163961]: 2025-10-09 09:59:26.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:26 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:27 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:28 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:29 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.531 2 INFO nova.compute.manager [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Get console output#033[00m
Oct  9 09:59:29 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.535 2 INFO oslo.privsep.daemon [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1s6af3x0/privsep.sock']#033[00m
Oct  9 09:59:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:29 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.073 2 INFO oslo.privsep.daemon [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.989 764 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.993 764 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.995 764 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.995 764 INFO oslo.privsep.daemon [-] privsep daemon running as pid 764#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.151 764 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  9 09:59:30 compute-2 podman[168922]: 2025-10-09 09:59:30.201679357 +0000 UTC m=+0.034994434 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:59:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:30 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:31.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:31 compute-2 nova_compute[163961]: 2025-10-09 09:59:31.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:31 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:32 compute-2 podman[168940]: 2025-10-09 09:59:32.211438819 +0000 UTC m=+0.043777802 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  9 09:59:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:32 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:33 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.142 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.143 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.143 2 DEBUG nova.objects.instance [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.353 2 DEBUG nova.objects.instance [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_requests' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.363 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  9 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.472 2 DEBUG nova.policy [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  9 09:59:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:34.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:34 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:35 compute-2 nova_compute[163961]: 2025-10-09 09:59:35.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:35 compute-2 nova_compute[163961]: 2025-10-09 09:59:35.293 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully created port: 8bfb9190-a455-483f-a18f-f65db3220f30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  9 09:59:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:35 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.247 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully updated port: 8bfb9190-a455-483f-a18f-f65db3220f30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.258 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.258 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.259 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG nova.compute.manager [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG nova.compute.manager [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:36.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:36 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.672 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.virt.libvirt.vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.689 2 DEBUG os_vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bfb9190-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bfb9190-a4, col_values=(('external_ids', {'iface-id': '8bfb9190-a455-483f-a18f-f65db3220f30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:80:50', 'vm-uuid': 'c7c7e2ca-e694-465f-941e-15513c7e91ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.6952] manager: (tap8bfb9190-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.700 2 INFO os_vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4')#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.virt.libvirt.vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.703 2 DEBUG nova.virt.libvirt.guest [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] attach device xml: <interface type="ethernet">
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <mac address="fa:16:3e:79:80:50"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <model type="virtio"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <mtu size="1442"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <target dev="tap8bfb9190-a4"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]: </interface>
Oct  9 09:59:37 compute-2 nova_compute[163961]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  9 09:59:37 compute-2 kernel: tap8bfb9190-a4: entered promiscuous mode
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7108] manager: (tap8bfb9190-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00034|binding|INFO|Claiming lport 8bfb9190-a455-483f-a18f-f65db3220f30 for this chassis.
Oct  9 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00035|binding|INFO|8bfb9190-a455-483f-a18f-f65db3220f30: Claiming fa:16:3e:79:80:50 10.100.0.28
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.723 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:80:50 10.100.0.28'], port_security=['fa:16:3e:79:80:50 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=8bfb9190-a455-483f-a18f-f65db3220f30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.724 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 8bfb9190-a455-483f-a18f-f65db3220f30 in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 bound to our chassis#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.725 71793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.733 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2db4e1-6f9b-4f11-8486-ebdfd2443a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.733 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f792301-c1 in ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.735 168221 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f792301-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.735 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[9aff95a9-91df-431b-9001-13a50cafbaae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.737 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb4ca50-d9bc-46ee-89bb-7766d76dc340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 systemd-udevd[168996]: Network interface NamePolicy= disabled on kernel command line.
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.751 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[bf266666-f2ef-4e1b-861b-59795b76da6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7604] device (tap8bfb9190-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7609] device (tap8bfb9190-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00036|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 ovn-installed in OVS
Oct  9 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00037|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 up in Southbound
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.766 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:1b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:79:80:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.783 2 DEBUG nova.virt.libvirt.guest [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:flavor name="m1.nano">
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:memory>128</nova:memory>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:disk>1</nova:disk>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:swap>0</nova:swap>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:vcpus>1</nova:vcpus>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  </nova:flavor>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:owner>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  </nova:owner>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  <nova:ports>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct  9 09:59:37 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct  9 09:59:37 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  9 09:59:37 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 09:59:37 compute-2 nova_compute[163961]:  </nova:ports>
Oct  9 09:59:37 compute-2 nova_compute[163961]: </nova:instance>
Oct  9 09:59:37 compute-2 nova_compute[163961]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.785 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[43a366fa-166d-4324-a0e2-8376c621664f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.800 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.804 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[18fc7233-497e-4326-9646-55611a13387a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.809 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[c2249c19-9598-4782-ac9c-a56990517e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.8096] manager: (tap4f792301-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.829 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[36fd3e89-5a40-461f-8be5-7e6895ef8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.832 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d36180-2477-461f-a49d-a46ba4d63346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.8477] device (tap4f792301-c0): carrier: link connected
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.852 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d2ab23-43cf-4efa-8502-c4d51f2b35ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.864 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[dc663f1a-dafc-46c5-9114-0400eb80e92d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166228, 'reachable_time': 37744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169014, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.875 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[45abeea4-b1e6-450e-a709-6bbe690f17e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:7e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 166228, 'tstamp': 166228}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 169015, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.886 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[2b373977-deaa-4ebc-9f2f-9dc42827577b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166228, 'reachable_time': 37744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 169016, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:37 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.905 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1ab65-f9aa-4d26-aeac-5f050896cc7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.942 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[f252b447-00fc-43e6-b833-da97e0496db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f792301-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.9456] manager: (tap4f792301-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  9 09:59:37 compute-2 kernel: tap4f792301-c0: entered promiscuous mode
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.949 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f792301-c0, col_values=(('external_ids', {'iface-id': '704a96af-9e0f-4b61-9b53-029cbdc713e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00038|binding|INFO|Releasing lport 704a96af-9e0f-4b61-9b53-029cbdc713e8 from this chassis (sb_readonly=0)
Oct  9 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.963 71793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.963 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[28ea801e-1924-4fde-81bb-2b3e444e1629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.964 71793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: global
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    log         /dev/log local0 debug
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    log-tag     haproxy-metadata-proxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    user        root
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    group       root
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    maxconn     1024
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    pidfile     /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    daemon
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: defaults
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    log global
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    mode http
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    option httplog
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    option dontlognull
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    option http-server-close
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    option forwardfor
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    retries                 3
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    timeout http-request    30s
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    timeout connect         30s
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    timeout client          32s
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    timeout server          32s
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    timeout http-keep-alive 30s
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: listen listener
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    bind 169.254.169.254:80
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    server metadata /var/lib/neutron/metadata_proxy
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]:    http-request add-header X-OVN-Network-ID 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  9 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.964 71793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'env', 'PROCESS_TAG=haproxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f792301-cf2d-455d-8ad6-8a55cc3146e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.242699355 +0000 UTC m=+0.033715565 container create 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 09:59:38 compute-2 systemd[1]: Started libpod-conmon-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope.
Oct  9 09:59:38 compute-2 systemd[1]: Started libcrun container.
Oct  9 09:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3fd04afb0f5989930c0fda502bb4a65862b9bd9c1ecbb0d35d11811aaed28a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  9 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.306457076 +0000 UTC m=+0.097473306 container init 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.310674036 +0000 UTC m=+0.101690244 container start 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  9 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.226608103 +0000 UTC m=+0.017624332 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  9 09:59:38 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : New worker (169064) forked
Oct  9 09:59:38 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : Loading success.
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.404 2 DEBUG nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.404 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.406 2 WARNING nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:59:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.760 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.761 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.774 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:38 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:39 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:40 compute-2 podman[169070]: 2025-10-09 09:59:40.226429203 +0000 UTC m=+0.060045345 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 09:59:40 compute-2 ovn_controller[62794]: 2025-10-09T09:59:40Z|00006|pinctrl(ovn_pinctrl1)|INFO|DHCPOFFER fa:16:3e:79:80:50 10.100.0.28
Oct  9 09:59:40 compute-2 ovn_controller[62794]: 2025-10-09T09:59:40Z|00007|pinctrl(ovn_pinctrl1)|INFO|DHCPACK fa:16:3e:79:80:50 10.100.0.28
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 DEBUG nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 WARNING nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.#033[00m
Oct  9 09:59:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:40 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:41 compute-2 nova_compute[163961]: 2025-10-09 09:59:41.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:41 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:42.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:42 compute-2 nova_compute[163961]: 2025-10-09 09:59:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:42 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:43 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct  9 09:59:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct  9 09:59:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:44 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:45 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.380286) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986380313, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 250, "total_data_size": 3946176, "memory_usage": 3994632, "flush_reason": "Manual Compaction"}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986384944, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1603308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23183, "largest_seqno": 24741, "table_properties": {"data_size": 1598171, "index_size": 2405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13335, "raw_average_key_size": 20, "raw_value_size": 1586997, "raw_average_value_size": 2456, "num_data_blocks": 104, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003856, "oldest_key_time": 1760003856, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4677 microseconds, and 3270 cpu microseconds.
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384968) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1603308 bytes OK
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384981) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385542) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385554) EVENT_LOG_v1 {"time_micros": 1760003986385551, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3938945, prev total WAL file size 3938945, number of live WAL files 2.
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386258) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1565KB)], [42(13MB)]
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986386280, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16204156, "oldest_snapshot_seqno": -1}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5589 keys, 13065364 bytes, temperature: kUnknown
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986423770, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13065364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13028526, "index_size": 21752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 140469, "raw_average_key_size": 25, "raw_value_size": 12927661, "raw_average_value_size": 2313, "num_data_blocks": 891, "num_entries": 5589, "num_filter_entries": 5589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423945) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13065364 bytes
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.424325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 431.9 rd, 348.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(8.1) OK, records in: 6047, records dropped: 458 output_compression: NoCompression
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.424338) EVENT_LOG_v1 {"time_micros": 1760003986424332, "job": 24, "event": "compaction_finished", "compaction_time_micros": 37522, "compaction_time_cpu_micros": 21407, "output_level": 6, "num_output_files": 1, "total_output_size": 13065364, "num_input_records": 6047, "num_output_records": 5589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986424604, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986426204, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 09:59:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:46 compute-2 nova_compute[163961]: 2025-10-09 09:59:46.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:46 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:47 compute-2 nova_compute[163961]: 2025-10-09 09:59:47.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:47 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:48 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:49 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:50 compute-2 podman[169103]: 2025-10-09 09:59:50.207401178 +0000 UTC m=+0.039886465 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 09:59:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:50.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:50 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:51 compute-2 nova_compute[163961]: 2025-10-09 09:59:51.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:51 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:52.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:52 compute-2 nova_compute[163961]: 2025-10-09 09:59:52.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:52 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:53 compute-2 nova_compute[163961]: 2025-10-09 09:59:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:53 compute-2 nova_compute[163961]: 2025-10-09 09:59:53.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:53 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.189 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2606411561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.550 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.604 2 DEBUG nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.605 2 DEBUG nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.822 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.824 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4862MB free_disk=59.92177200317383GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.824 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.825 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Instance c7c7e2ca-e694-465f-941e-15513c7e91ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 09:59:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:54 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.900 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3859217005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.251 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.255 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.281 2 ERROR nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [req-36fd6b6a-bf1f-43c1-8762-07164ceae307] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 41a86af9-054a-49c9-9d2e-f0396c1c31a8.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-36fd6b6a-bf1f-43c1-8762-07164ceae307"}]}#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.295 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.309 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.310 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.323 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.338 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.362 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 09:59:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 09:59:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/136677374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.702 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.705 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updated inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.755 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 09:59:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:55 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 09:59:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.756 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.756 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.757 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:56 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG nova.network.neutron [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  9 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG nova.objects.instance [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 09:59:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:57.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 09:59:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:57 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.404 2 DEBUG nova.network.neutron [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.421 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 09:59:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:58 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 09:59:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 09:59:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 09:59:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 09:59:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:59 2025: (VI_0) received an invalid passwd!
Oct  9 09:59:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:00 compute-2 ceph-mon[5983]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:00:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:01 compute-2 systemd[1]: Starting system activity accounting tool...
Oct  9 10:00:01 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:00:01 compute-2 systemd[1]: Finished system activity accounting tool.
Oct  9 10:00:01 compute-2 podman[169223]: 2025-10-09 10:00:01.199995722 +0000 UTC m=+0.034283454 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct  9 10:00:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:01.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:01 compute-2 nova_compute[163961]: 2025-10-09 10:00:01.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:02.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:02 compute-2 nova_compute[163961]: 2025-10-09 10:00:02.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:03 compute-2 podman[169241]: 2025-10-09 10:00:03.206088934 +0000 UTC m=+0.037929758 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 10:00:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:04.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:06.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:06 compute-2 nova_compute[163961]: 2025-10-09 10:00:06.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:07.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:07 compute-2 nova_compute[163961]: 2025-10-09 10:00:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.349 2 DEBUG nova.compute.manager [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG nova.compute.manager [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:00:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.282 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  9 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.283 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.296 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:00:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:11 compute-2 podman[169266]: 2025-10-09 10:00:11.217656186 +0000 UTC m=+0.053861709 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 10:00:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:11.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:11 compute-2 nova_compute[163961]: 2025-10-09 10:00:11.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:12.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:12 compute-2 nova_compute[163961]: 2025-10-09 10:00:12.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:13.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:14.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:15.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:16 compute-2 nova_compute[163961]: 2025-10-09 10:00:16.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:17.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:17 compute-2 nova_compute[163961]: 2025-10-09 10:00:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:18.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:21 compute-2 podman[169325]: 2025-10-09 10:00:21.21255292 +0000 UTC m=+0.041865818 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.393990) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021394019, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 591, "num_deletes": 257, "total_data_size": 926600, "memory_usage": 939392, "flush_reason": "Manual Compaction"}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021397375, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 609209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24746, "largest_seqno": 25332, "table_properties": {"data_size": 606288, "index_size": 893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 17, "raw_value_size": 600364, "raw_average_value_size": 1592, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003987, "oldest_key_time": 1760003987, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3409 microseconds, and 2536 cpu microseconds.
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397400) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 609209 bytes OK
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397410) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398297) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398309) EVENT_LOG_v1 {"time_micros": 1760004021398306, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398317) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 923226, prev total WAL file size 923226, number of live WAL files 2.
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398597) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(594KB)], [45(12MB)]
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021398621, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13674573, "oldest_snapshot_seqno": -1}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5444 keys, 13531201 bytes, temperature: kUnknown
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021426368, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13531201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13494479, "index_size": 22020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 138662, "raw_average_key_size": 25, "raw_value_size": 13395310, "raw_average_value_size": 2460, "num_data_blocks": 899, "num_entries": 5444, "num_filter_entries": 5444, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.426494) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13531201 bytes
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.436016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 492.2 rd, 487.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(44.7) write-amplify(22.2) OK, records in: 5966, records dropped: 522 output_compression: NoCompression
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.436031) EVENT_LOG_v1 {"time_micros": 1760004021436024, "job": 26, "event": "compaction_finished", "compaction_time_micros": 27782, "compaction_time_cpu_micros": 19207, "output_level": 6, "num_output_files": 1, "total_output_size": 13531201, "num_input_records": 5966, "num_output_records": 5444, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021436167, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021437504, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:00:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:21 compute-2 nova_compute[163961]: 2025-10-09 10:00:21.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:00:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:22.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:22 compute-2 nova_compute[163961]: 2025-10-09 10:00:22.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:23.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:24.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:26.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:26 compute-2 nova_compute[163961]: 2025-10-09 10:00:26.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:00:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:27 compute-2 nova_compute[163961]: 2025-10-09 10:00:27.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:30.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:31 compute-2 nova_compute[163961]: 2025-10-09 10:00:31.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:32 compute-2 podman[169457]: 2025-10-09 10:00:32.20483604 +0000 UTC m=+0.039190895 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 10:00:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:32 compute-2 nova_compute[163961]: 2025-10-09 10:00:32.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:34 compute-2 podman[169500]: 2025-10-09 10:00:34.234554939 +0000 UTC m=+0.070174758 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 10:00:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:35.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:36.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:36 compute-2 nova_compute[163961]: 2025-10-09 10:00:36.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:37 compute-2 nova_compute[163961]: 2025-10-09 10:00:37.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:38 compute-2 ovn_controller[62794]: 2025-10-09T10:00:38Z|00039|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct  9 10:00:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:41 compute-2 nova_compute[163961]: 2025-10-09 10:00:41.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:42 compute-2 podman[169526]: 2025-10-09 10:00:42.220405896 +0000 UTC m=+0.055640477 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  9 10:00:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:42 compute-2 nova_compute[163961]: 2025-10-09 10:00:42.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:43.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:44.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:45.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:46.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:46 compute-2 nova_compute[163961]: 2025-10-09 10:00:46.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:47 compute-2 nova_compute[163961]: 2025-10-09 10:00:47.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:47.449 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 10:00:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:47.449 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  9 10:00:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:47.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:47 compute-2 nova_compute[163961]: 2025-10-09 10:00:47.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:48.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.299 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.299 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.314 2 DEBUG nova.objects.instance [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.328 2 DEBUG nova.virt.libvirt.vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.328 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.329 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.331 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.332 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.334 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Attempting to detach device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.334 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <mac address="fa:16:3e:79:80:50"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <model type="virtio"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <mtu size="1442"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <target dev="tap8bfb9190-a4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </interface>
Oct  9 10:00:49 compute-2 nova_compute[163961]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.337 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.339 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface>not found in domain: <domain type='kvm' id='1'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <name>instance-00000006</name>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <metadata>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:flavor name="m1.nano">
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:memory>128</nova:memory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:disk>1</nova:disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:swap>0</nova:swap>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:vcpus>1</nova:vcpus>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:flavor>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </metadata>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <memory unit='KiB'>131072</memory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <vcpu placement='static'>1</vcpu>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <resource>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <partition>/machine</partition>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </resource>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <sysinfo type='smbios'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <system>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='manufacturer'>RDO</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='product'>OpenStack Compute</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='serial'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='uuid'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='family'>Virtual Machine</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </system>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </sysinfo>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <os>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <boot dev='hd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <smbios mode='sysinfo'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </os>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <features>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <acpi/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <apic/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <vmcoreinfo state='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </features>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <cpu mode='custom' match='exact' check='full'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <model fallback='forbid'>EPYC-Milan</model>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <vendor>AMD</vendor>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='x2apic'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='tsc-deadline'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='hypervisor'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='tsc_adjust'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='vaes'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='vpclmulqdq'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='spec-ctrl'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='stibp'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='arch-capabilities'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='ssbd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='cmp_legacy'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='overflow-recov'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='succor'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='virt-ssbd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='lbrv'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='tsc-scale'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='vmcb-clean'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='flushbyasid'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='pause-filter'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='pfthreshold'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='v-vmsave-vmload'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='vgif'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='rdctl-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='mds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='pschange-mc-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='gds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='rfds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='svm'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='topoext'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='npt'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='nrip-save'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </cpu>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <clock offset='utc'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='pit' tickpolicy='delay'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='hpet' present='no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </clock>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_poweroff>destroy</on_poweroff>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_reboot>restart</on_reboot>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_crash>destroy</on_crash>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <devices>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <disk type='network' device='disk'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <auth username='openstack'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </auth>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk' index='2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.100' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.102' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.101' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </source>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='vda' bus='virtio'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='virtio-disk0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <disk type='network' device='cdrom'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <auth username='openstack'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </auth>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config' index='1'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.100' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.102' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.101' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </source>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='sda' bus='sata'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <readonly/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='sata0-0-0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='0' model='pcie-root'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pcie.0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='1' port='0x10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='2' port='0x11'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='3' port='0x12'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='4' port='0x13'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='5' port='0x14'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='6' port='0x15'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='7' port='0x16'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='8' port='0x17'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.8'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='9' port='0x18'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.9'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='10' port='0x19'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='11' port='0x1a'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.11'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='12' port='0x1b'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.12'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='13' port='0x1c'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.13'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='14' port='0x1d'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.14'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='15' port='0x1e'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.15'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='16' port='0x1f'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.16'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='17' port='0x20'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.17'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='18' port='0x21'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.18'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='19' port='0x22'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.19'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='20' port='0x23'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.20'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='21' port='0x24'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.21'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='22' port='0x25'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.22'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='23' port='0x26'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.23'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='24' port='0x27'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.24'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='25' port='0x28'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.25'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-pci-bridge'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.26'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='usb'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='sata' index='0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='ide'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <interface type='ethernet'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mac address='fa:16:3e:d9:1b:2f'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='tap55484b13-54'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model type='virtio'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mtu size='1442'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='net0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </interface>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <interface type='ethernet'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mac address='fa:16:3e:79:80:50'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='tap8bfb9190-a4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model type='virtio'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mtu size='1442'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='net1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </interface>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <serial type='pty'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source path='/dev/pts/0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target type='isa-serial' port='0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <model name='isa-serial'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </target>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='serial0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </serial>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <console type='pty' tty='/dev/pts/0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source path='/dev/pts/0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target type='serial' port='0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='serial0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </console>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='tablet' bus='usb'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='usb' bus='0' port='1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='mouse' bus='ps2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='keyboard' bus='ps2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <listen type='address' address='::0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </graphics>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <audio id='1' type='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <video>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model type='virtio' heads='1' primary='yes'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='video0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </video>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <watchdog model='itco' action='reset'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='watchdog0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </watchdog>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <memballoon model='virtio'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <stats period='10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='balloon0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </memballoon>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <rng model='virtio'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <backend model='random'>/dev/urandom</backend>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='rng0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </rng>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </devices>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <label>system_u:system_r:svirt_t:s0:c477,c914</label>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c477,c914</imagelabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </seclabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <label>+107:+107</label>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <imagelabel>+107:+107</imagelabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </seclabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </domain>
Oct  9 10:00:49 compute-2 nova_compute[163961]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.339 2 INFO nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the persistent domain config.
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.340 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] (1/8): Attempting to detach device tap8bfb9190-a4 with device alias net1 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.340 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <mac address="fa:16:3e:79:80:50"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <model type="virtio"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <driver name="vhost" rx_queue_size="512"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <mtu size="1442"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <target dev="tap8bfb9190-a4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </interface>
Oct  9 10:00:49 compute-2 nova_compute[163961]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  9 10:00:49 compute-2 kernel: tap8bfb9190-a4 (unregistering): left promiscuous mode
Oct  9 10:00:49 compute-2 NetworkManager[984]: <info>  [1760004049.4362] device (tap8bfb9190-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00040|binding|INFO|Releasing lport 8bfb9190-a455-483f-a18f-f65db3220f30 from this chassis (sb_readonly=0)
Oct  9 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00041|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 down in Southbound
Oct  9 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00042|binding|INFO|Removing iface tap8bfb9190-a4 ovn-installed in OVS
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.453 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:80:50 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=8bfb9190-a455-483f-a18f-f65db3220f30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.455 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 8bfb9190-a455-483f-a18f-f65db3220f30 in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 unbound from our chassis
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.456 71793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.457 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[406f3ba4-adb8-470c-86ce-feae033adf39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.457 71793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace which is not needed anymore
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.456 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Received event <DeviceRemovedEvent: 1760004049.4561546, c7c7e2ca-e694-465f-941e-15513c7e91ab => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.457 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Start waiting for the detach event from libvirt for device tap8bfb9190-a4 with device alias net1 for instance c7c7e2ca-e694-465f-941e-15513c7e91ab _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.457 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.462 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface>not found in domain: <domain type='kvm' id='1'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <name>instance-00000006</name>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <metadata>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:flavor name="m1.nano">
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:memory>128</nova:memory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:disk>1</nova:disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:swap>0</nova:swap>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:vcpus>1</nova:vcpus>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:flavor>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </metadata>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <memory unit='KiB'>131072</memory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <vcpu placement='static'>1</vcpu>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <resource>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <partition>/machine</partition>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </resource>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <sysinfo type='smbios'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <system>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='manufacturer'>RDO</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='product'>OpenStack Compute</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='serial'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='uuid'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <entry name='family'>Virtual Machine</entry>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </system>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </sysinfo>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <os>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <boot dev='hd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <smbios mode='sysinfo'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </os>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <features>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <acpi/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <apic/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <vmcoreinfo state='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </features>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <cpu mode='custom' match='exact' check='full'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <model fallback='forbid'>EPYC-Milan</model>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <vendor>AMD</vendor>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='x2apic'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='tsc-deadline'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='hypervisor'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='tsc_adjust'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='vaes'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='vpclmulqdq'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='spec-ctrl'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='stibp'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='arch-capabilities'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='ssbd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='cmp_legacy'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='overflow-recov'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='succor'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='virt-ssbd'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='lbrv'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='tsc-scale'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='vmcb-clean'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='flushbyasid'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='pause-filter'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='pfthreshold'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='v-vmsave-vmload'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='vgif'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='rdctl-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='mds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='pschange-mc-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='gds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='rfds-no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='svm'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='require' name='topoext'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='npt'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='nrip-save'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </cpu>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <clock offset='utc'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='pit' tickpolicy='delay'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <timer name='hpet' present='no'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </clock>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_poweroff>destroy</on_poweroff>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_reboot>restart</on_reboot>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <on_crash>destroy</on_crash>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <devices>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <disk type='network' device='disk'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <auth username='openstack'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </auth>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk' index='2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.100' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.102' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.101' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </source>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='vda' bus='virtio'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='virtio-disk0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <disk type='network' device='cdrom'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='qemu' type='raw' cache='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <auth username='openstack'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </auth>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config' index='1'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.100' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.102' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <host name='192.168.122.101' port='6789'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </source>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='sda' bus='sata'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <readonly/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='sata0-0-0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='0' model='pcie-root'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pcie.0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='1' port='0x10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='2' port='0x11'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='3' port='0x12'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='4' port='0x13'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='5' port='0x14'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='6' port='0x15'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='7' port='0x16'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='8' port='0x17'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.8'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='9' port='0x18'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.9'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='10' port='0x19'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='11' port='0x1a'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.11'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='12' port='0x1b'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.12'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='13' port='0x1c'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.13'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='14' port='0x1d'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.14'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='15' port='0x1e'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.15'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='16' port='0x1f'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.16'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='17' port='0x20'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.17'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='18' port='0x21'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.18'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='19' port='0x22'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.19'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='20' port='0x23'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.20'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='21' port='0x24'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.21'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='22' port='0x25'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.22'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='23' port='0x26'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.23'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='24' port='0x27'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.24'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-root-port'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target chassis='25' port='0x28'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.25'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model name='pcie-pci-bridge'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='pci.26'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='usb'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <controller type='sata' index='0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='ide'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </controller>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <interface type='ethernet'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mac address='fa:16:3e:d9:1b:2f'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target dev='tap55484b13-54'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model type='virtio'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <driver name='vhost' rx_queue_size='512'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <mtu size='1442'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='net0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </interface>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <serial type='pty'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source path='/dev/pts/0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target type='isa-serial' port='0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:        <model name='isa-serial'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      </target>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='serial0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </serial>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <console type='pty' tty='/dev/pts/0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <source path='/dev/pts/0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <target type='serial' port='0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='serial0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </console>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='tablet' bus='usb'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='usb' bus='0' port='1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='mouse' bus='ps2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input1'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <input type='keyboard' bus='ps2'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='input2'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </input>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <listen type='address' address='::0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </graphics>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <audio id='1' type='none'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <video>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <model type='virtio' heads='1' primary='yes'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='video0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </video>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <watchdog model='itco' action='reset'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='watchdog0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </watchdog>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <memballoon model='virtio'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <stats period='10'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='balloon0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </memballoon>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <rng model='virtio'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <backend model='random'>/dev/urandom</backend>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <alias name='rng0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </rng>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </devices>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <label>system_u:system_r:svirt_t:s0:c477,c914</label>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c477,c914</imagelabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </seclabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <label>+107:+107</label>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <imagelabel>+107:+107</imagelabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </seclabel>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </domain>
Oct  9 10:00:49 compute-2 nova_compute[163961]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.462 2 INFO nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the live domain config.
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.virt.libvirt.vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.464 2 DEBUG os_vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bfb9190-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.477 2 INFO os_vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4')
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.477 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:creationTime>2025-10-09 10:00:49</nova:creationTime>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:flavor name="m1.nano">
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:memory>128</nova:memory>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:disk>1</nova:disk>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:swap>0</nova:swap>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:ephemeral>0</nova:ephemeral>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:vcpus>1</nova:vcpus>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:flavor>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:owner>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  <nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct  9 10:00:49 compute-2 nova_compute[163961]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  9 10:00:49 compute-2 nova_compute[163961]:    </nova:port>
Oct  9 10:00:49 compute-2 nova_compute[163961]:  </nova:ports>
Oct  9 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct  9 10:00:49 compute-2 nova_compute[163961]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  9 10:00:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : haproxy version is 2.8.14-c23fe91
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : path to executable is /usr/sbin/haproxy
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : Exiting Master process...
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : Exiting Master process...
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [ALERT]    (169062) : Current worker (169064) exited with code 143 (Terminated)
Oct  9 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : All workers exited. Exiting... (0)
Oct  9 10:00:49 compute-2 systemd[1]: libpod-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope: Deactivated successfully.
Oct  9 10:00:49 compute-2 conmon[169058]: conmon 83c3e73ac0e8f23f4c80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope/container/memory.events
Oct  9 10:00:49 compute-2 podman[169575]: 2025-10-09 10:00:49.557984177 +0000 UTC m=+0.033531525 container died 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  9 10:00:49 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87-userdata-shm.mount: Deactivated successfully.
Oct  9 10:00:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-3a3fd04afb0f5989930c0fda502bb4a65862b9bd9c1ecbb0d35d11811aaed28a-merged.mount: Deactivated successfully.
Oct  9 10:00:49 compute-2 podman[169575]: 2025-10-09 10:00:49.586299769 +0000 UTC m=+0.061847117 container cleanup 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  9 10:00:49 compute-2 systemd[1]: libpod-conmon-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope: Deactivated successfully.
Oct  9 10:00:49 compute-2 podman[169603]: 2025-10-09 10:00:49.625159779 +0000 UTC m=+0.024261600 container remove 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.629 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[1dde37e0-480d-4f60-a38e-a69fe2c49437]: (4, ('Thu Oct  9 10:00:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87)\n83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87\nThu Oct  9 10:00:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87)\n83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.630 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca02b48-ccde-4395-8b93-6aa998dca7c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.631 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:49 compute-2 kernel: tap4f792301-c0: left promiscuous mode
Oct  9 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.649 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[d12dfc7a-5778-48dc-bcff-be6ead284c20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:49.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.663 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbcd4a7-2f2d-4820-b7ec-579c7672a8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.664 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[6b644519-8ea3-4565-b53f-53a5e7e9a935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.676 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[c732f0ae-d817-4fdc-b763-c594731e21fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166223, 'reachable_time': 21749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169614, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 systemd[1]: run-netns-ovnmeta\x2d4f792301\x2dcf2d\x2d455d\x2d8ad6\x2d8a55cc3146e9.mount: Deactivated successfully.
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.683 72006 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.684 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b3a848-55df-4705-95c9-0a972cfadb5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.107 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.107 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.108 2 DEBUG nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  9 10:00:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:50.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.225 2 INFO nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Port 8bfb9190-a455-483f-a18f-f65db3220f30 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.225 2 DEBUG nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.237 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.251 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:51 compute-2 ovn_controller[62794]: 2025-10-09T10:00:51Z|00043|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.478 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 WARNING nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 WARNING nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.#033[00m
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-deleted-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:52 compute-2 podman[169618]: 2025-10-09 10:00:52.208274939 +0000 UTC m=+0.043897597 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 INFO nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Terminating instance#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.282 2 DEBUG nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  9 10:00:52 compute-2 kernel: tap55484b13-54 (unregistering): left promiscuous mode
Oct  9 10:00:52 compute-2 NetworkManager[984]: <info>  [1760004052.3177] device (tap55484b13-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  9 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00044|binding|INFO|Releasing lport 55484b13-541c-4895-beab-bdcdaa30f4fe from this chassis (sb_readonly=0)
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00045|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe down in Southbound
Oct  9 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00046|binding|INFO|Removing iface tap55484b13-54 ovn-installed in OVS
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.328 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1b:2f 10.100.0.6'], port_security=['fa:16:3e:d9:1b:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72489230-c514-4cf9-bf1c-35e063204738', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=55484b13-541c-4895-beab-bdcdaa30f4fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.329 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 55484b13-541c-4895-beab-bdcdaa30f4fe in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 unbound from our chassis#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.330 71793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab21f371-26e2-4c4f-bba0-3c44fb308723, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.330 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[50d11188-3937-47b1-a6b8-ae0828875c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.331 71793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 namespace which is not needed anymore#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  9 10:00:52 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 14.845s CPU time.
Oct  9 10:00:52 compute-2 systemd-machined[121527]: Machine qemu-1-instance-00000006 terminated.
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : haproxy version is 2.8.14-c23fe91
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : path to executable is /usr/sbin/haproxy
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : Exiting Master process...
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : Exiting Master process...
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [ALERT]    (168333) : Current worker (168335) exited with code 143 (Terminated)
Oct  9 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : All workers exited. Exiting... (0)
Oct  9 10:00:52 compute-2 systemd[1]: libpod-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope: Deactivated successfully.
Oct  9 10:00:52 compute-2 conmon[168329]: conmon 53e2b84d792e150b60f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope/container/memory.events
Oct  9 10:00:52 compute-2 podman[169654]: 2025-10-09 10:00:52.427100437 +0000 UTC m=+0.035464279 container died 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  9 10:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc-userdata-shm.mount: Deactivated successfully.
Oct  9 10:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-d3b78e34eb4bd48f615c51f92b1c60c1faa9ea89e7ed53520625d65534f9f4de-merged.mount: Deactivated successfully.
Oct  9 10:00:52 compute-2 podman[169654]: 2025-10-09 10:00:52.453453719 +0000 UTC m=+0.061817561 container cleanup 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:00:52 compute-2 systemd[1]: libpod-conmon-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope: Deactivated successfully.
Oct  9 10:00:52 compute-2 NetworkManager[984]: <info>  [1760004052.4939] manager: (tap55484b13-54): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct  9 10:00:52 compute-2 podman[169700]: 2025-10-09 10:00:52.500853239 +0000 UTC m=+0.031428170 container remove 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.507 2 INFO nova.virt.libvirt.driver [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance destroyed successfully.#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.507 2 DEBUG nova.objects.instance [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.506 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8e1c15-4948-4d34-b6c7-00a2c393ff80]: (4, ('Thu Oct  9 10:00:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 (53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc)\n53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc\nThu Oct  9 10:00:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 (53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc)\n53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.509 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[ee50347d-d9c1-40e9-b6a6-9aff64916f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.510 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21f371-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 kernel: tapab21f371-20: left promiscuous mode
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.521 2 DEBUG nova.virt.libvirt.vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.521 2 DEBUG nova.network.os_vif_util [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.522 2 DEBUG nova.network.os_vif_util [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.522 2 DEBUG os_vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55484b13-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.531 2 INFO os_vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54')#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.532 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[74dcc56f-73c8-4b30-a23d-b38c721714e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.547 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[cd658022-bd75-4693-a4b2-157c09d5ccfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.548 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[03b0b621-91a9-4623-9d7d-6dbf5b3cb341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.561 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[d00236b6-2fed-45dc-a9d1-e137eef4fd22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163839, 'reachable_time': 20769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169747, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 systemd[1]: run-netns-ovnmeta\x2dab21f371\x2d26e2\x2d4c4f\x2dbba0\x2d3c44fb308723.mount: Deactivated successfully.
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.565 72006 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  9 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.565 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[57801d77-a591-482d-812f-991dac282980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.687 2 INFO nova.virt.libvirt.driver [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deleting instance files /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab_del#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.688 2 INFO nova.virt.libvirt.driver [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deletion of /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab_del complete#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.725 2 DEBUG nova.virt.libvirt.host [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.725 2 INFO nova.virt.libvirt.host [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] UEFI support detected#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.726 2 INFO nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.726 2 DEBUG oslo.service.loopingcall [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.727 2 DEBUG nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  9 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.727 2 DEBUG nova.network.neutron [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  9 10:00:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.243 2 DEBUG nova.network.neutron [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.265 2 INFO nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 0.54 seconds to deallocate network for instance.#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.299 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.300 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.407 2 DEBUG oslo_concurrency.processutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.482 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.482 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.581 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  9 10:00:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1311931696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.758 2 DEBUG oslo_concurrency.processutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.762 2 DEBUG nova.compute.provider_tree [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.773 2 DEBUG nova.scheduler.client.report [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.790 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.830 2 INFO nova.scheduler.client.report [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance c7c7e2ca-e694-465f-941e-15513c7e91ab#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.870 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.941 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.951 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.951 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 WARNING nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.954 2 WARNING nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state deleted and task_state None.#033[00m
Oct  9 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.954 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-deleted-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  9 10:00:54 compute-2 nova_compute[163961]: 2025-10-09 10:00:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-2 nova_compute[163961]: 2025-10-09 10:00:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:54 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:54.451 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:00:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.183 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:00:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.180 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:89:5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=188102c6-f5ba-4733-92be-2659db7ae55a) old=Port_Binding(mac=['fa:16:3e:77:89:5b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.181 71793 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 188102c6-f5ba-4733-92be-2659db7ae55a in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 updated#033[00m
Oct  9 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.181 71793 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ab21f371-26e2-4c4f-bba0-3c44fb308723 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.182 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb95d57-869e-4d72-8c9f-c9781cbec431]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.202 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.202 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:00:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2786803058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.550 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:00:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:00:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.754 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4994MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.755 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.756 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.793 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.794 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.805 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:00:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2722941605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.146 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.150 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.246 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.263 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.263 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.264 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.264 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.272 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct  9 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:00:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:58 compute-2 nova_compute[163961]: 2025-10-09 10:00:58.261 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:58 compute-2 nova_compute[163961]: 2025-10-09 10:00:58.262 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:59 compute-2 nova_compute[163961]: 2025-10-09 10:00:59.169 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  9 10:00:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:00:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:00:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:00:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:00:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:00:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:01.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:01 compute-2 nova_compute[163961]: 2025-10-09 10:01:01.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:02 compute-2 nova_compute[163961]: 2025-10-09 10:01:02.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:03 compute-2 podman[169845]: 2025-10-09 10:01:03.211509037 +0000 UTC m=+0.044255174 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 10:01:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:03.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:04.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:05 compute-2 podman[169864]: 2025-10-09 10:01:05.233885682 +0000 UTC m=+0.068182261 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  9 10:01:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:05.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:06 compute-2 nova_compute[163961]: 2025-10-09 10:01:06.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.506 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004052.5048726, c7c7e2ca-e694-465f-941e-15513c7e91ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  9 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.506 2 INFO nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Stopped (Lifecycle Event)
Oct  9 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.520 2 DEBUG nova.compute.manager [None req-5263c0d5-61ed-4406-a3a0-445c32618551 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  9 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:01:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:01:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:07.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:01:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.308815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068308865, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 723, "num_deletes": 251, "total_data_size": 1417510, "memory_usage": 1444272, "flush_reason": "Manual Compaction"}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312095, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 932885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25337, "largest_seqno": 26055, "table_properties": {"data_size": 929346, "index_size": 1383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8231, "raw_average_key_size": 19, "raw_value_size": 922210, "raw_average_value_size": 2190, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004022, "oldest_key_time": 1760004022, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3301 microseconds, and 2165 cpu microseconds.
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312120) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 932885 bytes OK
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312131) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312483) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312495) EVENT_LOG_v1 {"time_micros": 1760004068312492, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1413626, prev total WAL file size 1413626, number of live WAL files 2.
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(911KB)], [48(12MB)]
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312882, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14464086, "oldest_snapshot_seqno": -1}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5349 keys, 12339798 bytes, temperature: kUnknown
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068339374, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12339798, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304704, "index_size": 20648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 137381, "raw_average_key_size": 25, "raw_value_size": 12208121, "raw_average_value_size": 2282, "num_data_blocks": 837, "num_entries": 5349, "num_filter_entries": 5349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.339489) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12339798 bytes
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.345880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 545.4 rd, 465.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(28.7) write-amplify(13.2) OK, records in: 5865, records dropped: 516 output_compression: NoCompression
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.345893) EVENT_LOG_v1 {"time_micros": 1760004068345888, "job": 28, "event": "compaction_finished", "compaction_time_micros": 26520, "compaction_time_cpu_micros": 17825, "output_level": 6, "num_output_files": 1, "total_output_size": 12339798, "num_input_records": 5865, "num_output_records": 5349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068346042, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068347752, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:01:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:09.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:10 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  9 10:01:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:11 compute-2 nova_compute[163961]: 2025-10-09 10:01:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:12 compute-2 nova_compute[163961]: 2025-10-09 10:01:12.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:12 compute-2 podman[169914]: 2025-10-09 10:01:12.631404344 +0000 UTC m=+0.060333643 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:01:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:16 compute-2 nova_compute[163961]: 2025-10-09 10:01:16.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:17 compute-2 nova_compute[163961]: 2025-10-09 10:01:17.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:21.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:21 compute-2 nova_compute[163961]: 2025-10-09 10:01:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:22 compute-2 nova_compute[163961]: 2025-10-09 10:01:22.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:23 compute-2 podman[169949]: 2025-10-09 10:01:23.201564653 +0000 UTC m=+0.036265860 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:01:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:26.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:26 compute-2 nova_compute[163961]: 2025-10-09 10:01:26.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:01:27 compute-2 nova_compute[163961]: 2025-10-09 10:01:27.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:27.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:29 compute-2 ovn_controller[62794]: 2025-10-09T10:01:29Z|00047|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  9 10:01:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:31 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:01:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:31 compute-2 nova_compute[163961]: 2025-10-09 10:01:31.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:32 compute-2 nova_compute[163961]: 2025-10-09 10:01:32.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:34 compute-2 podman[170106]: 2025-10-09 10:01:34.204375559 +0000 UTC m=+0.039767583 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  9 10:01:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:34.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:36 compute-2 podman[170124]: 2025-10-09 10:01:36.206605273 +0000 UTC m=+0.042309866 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:01:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:36 compute-2 nova_compute[163961]: 2025-10-09 10:01:36.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:37 compute-2 nova_compute[163961]: 2025-10-09 10:01:37.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:37.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:01:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:39.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:01:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:41.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:41 compute-2 nova_compute[163961]: 2025-10-09 10:01:41.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:42 compute-2 nova_compute[163961]: 2025-10-09 10:01:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:42 compute-2 nova_compute[163961]: 2025-10-09 10:01:42.740 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:43 compute-2 podman[170148]: 2025-10-09 10:01:43.222383013 +0000 UTC m=+0.057436993 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  9 10:01:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:46 compute-2 nova_compute[163961]: 2025-10-09 10:01:46.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:47 compute-2 nova_compute[163961]: 2025-10-09 10:01:47.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:47.877 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:01:47 compute-2 nova_compute[163961]: 2025-10-09 10:01:47.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:47.878 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:01:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:49.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:49.880 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:01:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:51.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:51 compute-2 nova_compute[163961]: 2025-10-09 10:01:51.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:52 compute-2 nova_compute[163961]: 2025-10-09 10:01:52.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:52.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:53.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:54 compute-2 nova_compute[163961]: 2025-10-09 10:01:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:54 compute-2 podman[170208]: 2025-10-09 10:01:54.210463229 +0000 UTC m=+0.042985499 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  9 10:01:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:54.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:55 compute-2 nova_compute[163961]: 2025-10-09 10:01:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:55.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3931842824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.528 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.706 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.707 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5039MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.708 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.708 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.753 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.753 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.765 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:01:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/224353728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.105 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.108 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.126 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.127 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.128 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:01:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:01:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:01:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:01:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:01:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:59 compute-2 nova_compute[163961]: 2025-10-09 10:01:59.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:01:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:01:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:01:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:01:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:01:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:01:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:00 compute-2 nova_compute[163961]: 2025-10-09 10:02:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:01 compute-2 nova_compute[163961]: 2025-10-09 10:02:01.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:02 compute-2 nova_compute[163961]: 2025-10-09 10:02:02.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:05 compute-2 podman[170280]: 2025-10-09 10:02:05.207479189 +0000 UTC m=+0.038867715 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:02:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:05.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:06.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:06 compute-2 nova_compute[163961]: 2025-10-09 10:02:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:07 compute-2 podman[170298]: 2025-10-09 10:02:07.200444468 +0000 UTC m=+0.036174288 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 10:02:07 compute-2 nova_compute[163961]: 2025-10-09 10:02:07.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:07.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:08.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:02:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:02:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:02:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:02:11 compute-2 nova_compute[163961]: 2025-10-09 10:02:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:12 compute-2 nova_compute[163961]: 2025-10-09 10:02:12.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:12.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:14 compute-2 podman[170347]: 2025-10-09 10:02:14.214075784 +0000 UTC m=+0.051134932 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:02:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:16.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:16 compute-2 nova_compute[163961]: 2025-10-09 10:02:16.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:17 compute-2 nova_compute[163961]: 2025-10-09 10:02:17.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:17.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:18.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:19.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:21 compute-2 nova_compute[163961]: 2025-10-09 10:02:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:22 compute-2 nova_compute[163961]: 2025-10-09 10:02:22.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:22.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:23.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:25 compute-2 podman[170382]: 2025-10-09 10:02:25.202565969 +0000 UTC m=+0.038216835 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  9 10:02:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:25.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:26.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:26 compute-2 nova_compute[163961]: 2025-10-09 10:02:26.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:27 compute-2 nova_compute[163961]: 2025-10-09 10:02:27.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:28.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:29.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:30.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:31 compute-2 nova_compute[163961]: 2025-10-09 10:02:31.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:32 compute-2 nova_compute[163961]: 2025-10-09 10:02:32.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:02:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:02:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:33.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:36 compute-2 podman[170514]: 2025-10-09 10:02:36.198428412 +0000 UTC m=+0.034889762 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  9 10:02:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:36 compute-2 nova_compute[163961]: 2025-10-09 10:02:36.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:37 compute-2 nova_compute[163961]: 2025-10-09 10:02:37.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:37 compute-2 podman[170556]: 2025-10-09 10:02:37.914726197 +0000 UTC m=+0.036617530 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  9 10:02:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:02:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:41 compute-2 nova_compute[163961]: 2025-10-09 10:02:41.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:42 compute-2 nova_compute[163961]: 2025-10-09 10:02:42.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:45 compute-2 podman[170581]: 2025-10-09 10:02:45.223412135 +0000 UTC m=+0.059063250 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct  9 10:02:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:46 compute-2 nova_compute[163961]: 2025-10-09 10:02:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:47 compute-2 nova_compute[163961]: 2025-10-09 10:02:47.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:47.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602145734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:02:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:49.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:51 compute-2 nova_compute[163961]: 2025-10-09 10:02:51.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:52 compute-2 nova_compute[163961]: 2025-10-09 10:02:52.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:02:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:02:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:56.156 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:56.157 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.187 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.187 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:56 compute-2 podman[170640]: 2025-10-09 10:02:56.208533077 +0000 UTC m=+0.039808666 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct  9 10:02:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725321656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2491592431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.530 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.718 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.719 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5032MB free_disk=59.92198944091797GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.720 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.720 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.766 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.766 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.782 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:02:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2965295672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.123 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.127 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.139 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.140 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.141 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:02:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:02:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:02:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.142 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.142 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.186 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:02:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:02:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:02:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:02:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:59.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:01 compute-2 nova_compute[163961]: 2025-10-09 10:03:01.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:02.159 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:03:02 compute-2 nova_compute[163961]: 2025-10-09 10:03:02.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:02 compute-2 nova_compute[163961]: 2025-10-09 10:03:02.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:04 compute-2 nova_compute[163961]: 2025-10-09 10:03:04.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:06 compute-2 nova_compute[163961]: 2025-10-09 10:03:06.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:07 compute-2 podman[170714]: 2025-10-09 10:03:07.203298594 +0000 UTC m=+0.038252593 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  9 10:03:07 compute-2 nova_compute[163961]: 2025-10-09 10:03:07.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:08 compute-2 podman[170731]: 2025-10-09 10:03:08.212350655 +0000 UTC m=+0.048850043 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  9 10:03:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:08.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:10.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:11 compute-2 nova_compute[163961]: 2025-10-09 10:03:11.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:12 compute-2 nova_compute[163961]: 2025-10-09 10:03:12.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:12.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:16 compute-2 podman[170781]: 2025-10-09 10:03:16.228339284 +0000 UTC m=+0.060836883 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  9 10:03:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:16 compute-2 nova_compute[163961]: 2025-10-09 10:03:16.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:17 compute-2 nova_compute[163961]: 2025-10-09 10:03:17.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:17.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:19.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:20.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:21 compute-2 nova_compute[163961]: 2025-10-09 10:03:21.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:22 compute-2 nova_compute[163961]: 2025-10-09 10:03:22.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:22.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:24.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:26 compute-2 nova_compute[163961]: 2025-10-09 10:03:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:27 compute-2 podman[170815]: 2025-10-09 10:03:27.226529639 +0000 UTC m=+0.052968357 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true)
Oct  9 10:03:27 compute-2 nova_compute[163961]: 2025-10-09 10:03:27.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:27.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:29.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:31 compute-2 nova_compute[163961]: 2025-10-09 10:03:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:31.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:32 compute-2 nova_compute[163961]: 2025-10-09 10:03:32.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:33.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:34.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:36 compute-2 nova_compute[163961]: 2025-10-09 10:03:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:37 compute-2 nova_compute[163961]: 2025-10-09 10:03:37.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:37.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:38 compute-2 podman[170892]: 2025-10-09 10:03:38.054966348 +0000 UTC m=+0.042511292 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  9 10:03:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:39 compute-2 podman[170965]: 2025-10-09 10:03:39.246597094 +0000 UTC m=+0.081570354 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  9 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:03:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:39.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:41 compute-2 nova_compute[163961]: 2025-10-09 10:03:41.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:42 compute-2 nova_compute[163961]: 2025-10-09 10:03:42.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:03:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:43.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:45.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:46 compute-2 nova_compute[163961]: 2025-10-09 10:03:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:47 compute-2 podman[171015]: 2025-10-09 10:03:47.243667212 +0000 UTC m=+0.068110247 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 10:03:47 compute-2 nova_compute[163961]: 2025-10-09 10:03:47.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:49.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:51 compute-2 nova_compute[163961]: 2025-10-09 10:03:51.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:51.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:52 compute-2 nova_compute[163961]: 2025-10-09 10:03:52.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:03:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:54 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  9 10:03:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:03:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:55.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:03:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/604600001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.534 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:56.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.748 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5012MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.790 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.791 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.802 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:03:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:03:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2413348394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.161 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.164 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.175 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.176 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.176 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:03:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:57.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:58 compute-2 podman[171119]: 2025-10-09 10:03:58.218725651 +0000 UTC m=+0.048903464 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  9 10:03:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:58.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.187 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.188 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.188 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:03:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:03:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:03:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:03:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:03:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:03:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:04:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:00.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:01 compute-2 nova_compute[163961]: 2025-10-09 10:04:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:01.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:02 compute-2 nova_compute[163961]: 2025-10-09 10:04:02.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:02 compute-2 nova_compute[163961]: 2025-10-09 10:04:02.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:02.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:03.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:04 compute-2 nova_compute[163961]: 2025-10-09 10:04:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:05 compute-2 nova_compute[163961]: 2025-10-09 10:04:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:05 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:05.002 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:04:05 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:05.002 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:04:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:06 compute-2 nova_compute[163961]: 2025-10-09 10:04:06.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:07 compute-2 nova_compute[163961]: 2025-10-09 10:04:07.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:08 compute-2 podman[171147]: 2025-10-09 10:04:08.204371822 +0000 UTC m=+0.039237289 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:04:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:10 compute-2 podman[171166]: 2025-10-09 10:04:10.202411815 +0000 UTC m=+0.036284672 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct  9 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.282 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.282 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.283 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct  9 10:04:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  9 10:04:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct  9 10:04:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  9 10:04:11 compute-2 nova_compute[163961]: 2025-10-09 10:04:11.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:12 compute-2 nova_compute[163961]: 2025-10-09 10:04:12.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:12.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:13.004 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:04:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:16 compute-2 nova_compute[163961]: 2025-10-09 10:04:16.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:17 compute-2 nova_compute[163961]: 2025-10-09 10:04:17.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:18 compute-2 podman[171216]: 2025-10-09 10:04:18.220600961 +0000 UTC m=+0.055982621 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:04:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:19.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:21 compute-2 nova_compute[163961]: 2025-10-09 10:04:21.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:21.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:22 compute-2 nova_compute[163961]: 2025-10-09 10:04:22.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:24.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:26.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:26 compute-2 nova_compute[163961]: 2025-10-09 10:04:26.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:27 compute-2 nova_compute[163961]: 2025-10-09 10:04:27.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:29 compute-2 podman[171250]: 2025-10-09 10:04:29.220178081 +0000 UTC m=+0.050791785 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  9 10:04:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:29.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:31 compute-2 nova_compute[163961]: 2025-10-09 10:04:31.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:32 compute-2 nova_compute[163961]: 2025-10-09 10:04:32.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:34.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.845092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275845110, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2360, "num_deletes": 251, "total_data_size": 6086491, "memory_usage": 6183008, "flush_reason": "Manual Compaction"}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275853889, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3946077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26060, "largest_seqno": 28415, "table_properties": {"data_size": 3936844, "index_size": 5727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19559, "raw_average_key_size": 20, "raw_value_size": 3918125, "raw_average_value_size": 4051, "num_data_blocks": 252, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004069, "oldest_key_time": 1760004069, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8827 microseconds, and 5918 cpu microseconds.
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853917) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3946077 bytes OK
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853932) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854301) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854313) EVENT_LOG_v1 {"time_micros": 1760004275854309, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854325) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6076116, prev total WAL file size 6076116, number of live WAL files 2.
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3853KB)], [51(11MB)]
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275855192, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16285875, "oldest_snapshot_seqno": -1}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5800 keys, 14126839 bytes, temperature: kUnknown
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275894185, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14126839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14087618, "index_size": 23623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147428, "raw_average_key_size": 25, "raw_value_size": 13982218, "raw_average_value_size": 2410, "num_data_blocks": 962, "num_entries": 5800, "num_filter_entries": 5800, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894507) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14126839 bytes
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.895292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 416.0 rd, 360.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6316, records dropped: 516 output_compression: NoCompression
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.895312) EVENT_LOG_v1 {"time_micros": 1760004275895301, "job": 30, "event": "compaction_finished", "compaction_time_micros": 39153, "compaction_time_cpu_micros": 20367, "output_level": 6, "num_output_files": 1, "total_output_size": 14126839, "num_input_records": 6316, "num_output_records": 5800, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275896520, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275898425, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:04:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:36 compute-2 nova_compute[163961]: 2025-10-09 10:04:36.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:37 compute-2 nova_compute[163961]: 2025-10-09 10:04:37.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:37.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:39 compute-2 podman[171302]: 2025-10-09 10:04:39.206390727 +0000 UTC m=+0.038663903 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  9 10:04:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:39.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:41 compute-2 podman[171321]: 2025-10-09 10:04:41.205476816 +0000 UTC m=+0.041339126 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  9 10:04:41 compute-2 nova_compute[163961]: 2025-10-09 10:04:41.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:41.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:42 compute-2 nova_compute[163961]: 2025-10-09 10:04:42.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:04:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:43.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:46 compute-2 nova_compute[163961]: 2025-10-09 10:04:46.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:47 compute-2 nova_compute[163961]: 2025-10-09 10:04:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:48.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:49 compute-2 podman[171450]: 2025-10-09 10:04:49.22434645 +0000 UTC m=+0.058230454 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  9 10:04:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:04:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:04:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:51 compute-2 nova_compute[163961]: 2025-10-09 10:04:51.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:51.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:52 compute-2 nova_compute[163961]: 2025-10-09 10:04:52.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:54 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:04:54 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:04:54 compute-2 systemd-logind[800]: New session 40 of user zuul.
Oct  9 10:04:54 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:04:54 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:04:54 compute-2 systemd[171508]: Queued start job for default target Main User Target.
Oct  9 10:04:54 compute-2 systemd[171508]: Created slice User Application Slice.
Oct  9 10:04:54 compute-2 systemd[171508]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:04:54 compute-2 systemd[171508]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:04:54 compute-2 systemd[171508]: Reached target Paths.
Oct  9 10:04:54 compute-2 systemd[171508]: Reached target Timers.
Oct  9 10:04:54 compute-2 systemd[171508]: Starting D-Bus User Message Bus Socket...
Oct  9 10:04:54 compute-2 systemd[171508]: Starting Create User's Volatile Files and Directories...
Oct  9 10:04:54 compute-2 systemd[171508]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:04:54 compute-2 systemd[171508]: Finished Create User's Volatile Files and Directories.
Oct  9 10:04:54 compute-2 systemd[171508]: Reached target Sockets.
Oct  9 10:04:54 compute-2 systemd[171508]: Reached target Basic System.
Oct  9 10:04:54 compute-2 systemd[171508]: Reached target Main User Target.
Oct  9 10:04:54 compute-2 systemd[171508]: Startup finished in 98ms.
Oct  9 10:04:54 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:04:54 compute-2 systemd[1]: Started Session 40 of User zuul.
Oct  9 10:04:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:04:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:04:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:56 compute-2 nova_compute[163961]: 2025-10-09 10:04:56.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:04:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:04:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/968507340' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:04:57 compute-2 nova_compute[163961]: 2025-10-09 10:04:57.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.191 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3527318548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.538 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.745 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4938MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:04:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.802 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.802 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.829 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.849 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.849 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.864 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.893 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.910 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:04:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:04:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3280518930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.250 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.254 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.269 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.270 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.270 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:04:59 compute-2 podman[171840]: 2025-10-09 10:04:59.339744153 +0000 UTC m=+0.050071924 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct  9 10:04:59 compute-2 ovs-vsctl[171866]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:04:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:04:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:04:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:04:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:04:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:05:00 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:05:00 compute-2 lvm[172157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:05:00 compute-2 lvm[172157]: VG ceph_vg0 finished
Oct  9 10:05:00 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:05:00 compute-2 kernel: block loop3: the capability attribute has been deprecated.
Oct  9 10:05:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:00 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:00 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  9 10:05:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:05:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:05:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3065331311' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:05:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:05:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912654270' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:05:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: ops {prefix=ops} (starting...)
Oct  9 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/900548494' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3249911151' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2389407953' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:05:02 compute-2 nova_compute[163961]: 2025-10-09 10:05:02.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:05:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: status {prefix=status} (starting...)
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1391361791' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1337245173' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct  9 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2101833220' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  9 10:05:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:04 compute-2 nova_compute[163961]: 2025-10-09 10:05:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:04 compute-2 nova_compute[163961]: 2025-10-09 10:05:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:05:05 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3772235470' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:05:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:05:05 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1482807627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:05:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 573440 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.850441 1 0.000094
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003642 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.004743 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.005029 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.942181 1 0.000090
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004360 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.007383 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001338005s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.714279175s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.007491 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] exit Reset 0.000090 1 0.000136
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000436783s) [0] async=[0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713455200s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] exit Reset 0.000162 1 0.000229
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] exit Start 0.000013 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.903509 1 0.000266
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004309 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.006738 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.006772 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000190735s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713485718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] exit Reset 0.000097 1 0.000167
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] exit Start 0.000040 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.981936 1 0.000073
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005744 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.009033 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.009317 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995488167s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.709838867s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] exit Reset 0.000211 1 0.000679
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] exit Start 0.000231 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 565248 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.557827 6 0.000151
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.558294 6 0.000097
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.556706 6 0.000712
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.558587 6 0.000627
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.686045 54 0.000267
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.697591 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.675503 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.675543 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311874390s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 153.583740234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] exit Reset 0.000058 1 0.000105
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.845861 42 0.000512
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.848990 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 19.851812 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 19.851854 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.154195786s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.426406860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.847647 42 0.000395
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] exit Reset 0.000275 1 0.000411
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.849865 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 19.851681 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 19.851746 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] exit Start 0.000254 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.152080536s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.424911499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001640 2 0.000063
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] exit Reset 0.000225 1 0.000588
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] exit Start 0.000008 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002328 2 0.000068
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002202 2 0.000069
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002296 2 0.000062
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.061447 2 0.000456
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063121 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.621055 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 483328 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112869 2 0.000532
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115282 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.673627 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.134904 2 0.000355
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137171 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.694222 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.154018 3 0.000357
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.154320 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000056 1 0.000078
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.154429 3 0.000043
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.154480 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000203 1 0.000255
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.155907 7 0.000065
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000200 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001603 2 0.000036
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000638 1 0.000042
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000395 2 0.000609
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171546 5 0.000137
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173987 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.732623 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 DELETING pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.024252 1 0.000136
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.024988 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.180932 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1482752 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638583 data_alloc: 218103808 data_used: 4096
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882714272s of 10.004929543s, submitted: 119
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001099 3 0.000083
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002799 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001411 3 0.000045
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001943 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.002807 5 0.000274
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000485 1 0.000080
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002964 5 0.000670
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000397 1 0.000023
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.133021 2 0.000066
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.133333 1 0.000036
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000564 1 0.000141
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038030 2 0.000058
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fcf38000/0x0/0x4ffc00000, data 0x82dee/0xf2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1458176 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.864070 1 0.000301
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001121 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.003948 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.003982 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001540184s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432449341s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] exit Reset 0.000272 1 0.000353
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] exit Start 0.000119 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.825928 1 0.000102
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001542 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.003736 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.004041 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001205444s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432983398s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] exit Reset 0.000551 1 0.000986
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] exit Start 0.000370 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1417216 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.221642 6 0.000514
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.223026 6 0.000474
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001592 2 0.000056
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001551 2 0.000032
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 DELETING pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.071402 2 0.000291
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.073054 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.295164 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 DELETING pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.123095 2 0.000252
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124757 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.347976 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1409024 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.168831 61 0.000190
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.174819 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.016765 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.016964 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830674171s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 active pruub 164.427581787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.171619 57 0.000427
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.174710 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 24.175102 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 24.176533 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827572823s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 active pruub 157.424896240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] exit Reset 0.000396 1 0.000701
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] exit Start 0.000058 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] exit Reset 0.000110 1 0.000454
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] exit Start 0.000068 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 1400832 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.827971 3 0.000367
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.828133 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.827930 3 0.000298
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.828167 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000084 1 0.000125
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000383 1 0.000467
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000092 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002518 2 0.000236
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000065 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003830 2 0.000041
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000072 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 1343488 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637492 data_alloc: 218103808 data_used: 4096
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003182 3 0.000179
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007196 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004316 3 0.000177
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007016 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002040 5 0.000678
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000079 1 0.000061
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001232 5 0.001238
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.064307 1 0.000072
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014249 2 0.000086
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.078522 1 0.000027
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.029252 1 0.000128
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028369 2 0.000086
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1277952 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.874780 1 0.000143
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013432 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020504 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020651 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.932686 1 0.000094
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013689 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020915 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020944 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988700867s) [0] async=[0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.435379028s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.988006592s) [0] async=[0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.434722900s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] exit Reset 0.000231 1 0.000335
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] exit Start 0.000042 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] exit Reset 0.000620 1 0.000678
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] exit Start 0.000148 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fcf2d000/0x0/0x4ffc00000, data 0x8d206/0xfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011785 7 0.000166
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000075 1 0.000129
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011416 7 0.000313
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000076
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038588 2 0.000393
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038760 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.050696 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053156 2 0.000230
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053293 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.064961 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x910ca/0x102000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637173 data_alloc: 218103808 data_used: 8192
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.072108269s of 10.152162552s, submitted: 64
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 1343488 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 85 handle_osd_map epochs [86,87], i have 85, src has [1,87]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.378818 51 0.000244
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.382150 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 23.383667 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 23.383914 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621111870s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 active pruub 167.427932739s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] exit Reset 0.000089 2 0.000149
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] exit Start 0.000007 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.383866 54 0.000305
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.386753 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 24.141847 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 24.141882 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616195679s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 active pruub 166.424423218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] exit Reset 0.000079 2 0.000135
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 87 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1277952 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015215 3 0.000071
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.015252 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000087 1 0.000122
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000060
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017844 3 0.000043
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.017983 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000917 1 0.001125
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.001249 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000168 1 0.001426
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000042 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 1261568 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fcb0f000/0x0/0x4ffc00000, data 0x97642/0x10b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991428 4 0.003359
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994948 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999107 4 0.000088
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999257 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001962 5 0.000609
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000175 1 0.000201
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002492 5 0.000281
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001214 1 0.000034
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049589 2 0.000086
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.049921 1 0.000031
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.036399 1 0.000084
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035438 2 0.000113
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 655345 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.889574 1 0.000106
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014287 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.013619 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.013806 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.961952 1 0.000169
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015326 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010304 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.011626 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987930298s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.825469971s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986706734s) [0] async=[0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.824310303s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] exit Reset 0.000138 1 0.000353
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] exit Start 0.000007 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] exit Reset 0.000733 1 0.001178
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] exit Start 0.000113 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010368 7 0.000098
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009759 7 0.000259
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000075
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000160
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046093 2 0.000858
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046197 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056147 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097928 2 0.000127
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098135 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.108580 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1187840 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcb06000/0x0/0x4ffc00000, data 0x9d582/0x113000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1187840 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1114112 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 644173 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcb09000/0x0/0x4ffc00000, data 0x9d582/0x113000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228099823s of 10.284918785s, submitted: 46
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 92 ms_handle_reset con 0x55bdd4b71800 session 0x55bdd6f7cd20
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 1097728 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f(unlocked)] enter Initial
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000028
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000042
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f(unlocked)] enter Initial
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000032
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000074 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000224
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000053 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000083 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000649 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007822 2 0.000283
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.008127 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.008274 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000091 1 0.000140
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.008689 2 0.000570
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.009923 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.009956 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000098 1 0.000700
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fcafd000/0x0/0x4ffc00000, data 0xa3a32/0x11c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1056768 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006523 6 0.000045
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.005585 6 0.000118
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003200 3 0.000093
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000068 1 0.000060
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035719 1 0.000069
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038703 3 0.000302
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000069 1 0.000060
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052673 1 0.000028
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1089536 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687609 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.921376 1 0.000024
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.012909 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.018605 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.974420 1 0.000019
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.013499 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020054 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000086 1 0.000143
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000040
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000195 1 0.000260
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000052 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000265
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=41
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=41
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001480 3 0.000251
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001782 3 0.000100
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1073152 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 97 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002478 2 0.000150
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004141 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002630 2 0.000149
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004571 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001269 4 0.000158
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001416 4 0.000108
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1040384 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1032192 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaf1000/0x0/0x4ffc00000, data 0xaba5e/0x12a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1032192 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaf1000/0x0/0x4ffc00000, data 0xaba5e/0x12a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 1015808 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 697254 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022220612s of 10.090888023s, submitted: 62
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10(unlocked)] enter Initial
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000030
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000041 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000195 1 0.000079
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000243 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 1015808 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.018166 2 0.000059
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.018439 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.018601 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000200
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 974848 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 933888 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.624711 5 0.000734
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002739 4 0.000103
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000077 1 0.000103
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014579 1 0.000032
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.381344 1 0.000034
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.398855 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.023599 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000072 1 0.000122
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000080
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002114 3 0.000068
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000031 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 909312 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 102 handle_osd_map epochs [101,103], i have 102, src has [1,103]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 102 handle_osd_map epochs [102,103], i have 103, src has [1,103]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12(unlocked)] enter Initial
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=0 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000123 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=0 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000241
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000190 1 0.000051
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003786 2 0.000144
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006069 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002140 3 0.000234
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 901120 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 725737 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.999621 2 0.000061
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.999896 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.999919 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000070 1 0.000118
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcae0000/0x0/0x4ffc00000, data 0xb5cd6/0x139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 827392 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.471482 5 0.000042
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 57.050686 132 0.002046
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 57.054232 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 58.057858 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 58.057936 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949655533s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 active pruub 197.427154541s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] exit Reset 0.000550 1 0.000960
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] exit Start 0.000039 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003429 4 0.000296
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000034 1 0.000052
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 745472 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028621 1 0.000022
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.555556 3 0.000330
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.556372 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.525361 1 0.000029
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.558147 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029727 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000727
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000188 1 0.000840
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000049 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000145
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000318 1 0.000304
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000270 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000279 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=26
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=26
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001665 3 0.000059
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 778240 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996741 2 0.000077
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998800 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [106,107], i have 107, src has [1,107]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997452 4 0.001677
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999353 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.606948 116 0.000474
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.609836 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 49.612684 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 49.612712 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393718719s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 active pruub 199.428039551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] exit Reset 0.000073 1 0.000128
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003316 3 0.000297
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.005944 5 0.000570
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001095 1 0.000101
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000348 1 0.000037
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.064291 2 0.000066
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.328914 1 0.000184
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.400946 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.400347 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.400446 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604859352s) [1] async=[1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 40'1059 active pruub 200.039978027s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] exit Reset 0.000189 1 0.000275
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] exit Start 0.000055 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.401337 3 0.000037
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.401403 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000137 1 0.000207
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000048
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000060 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 745472 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003088 4 0.000157
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003882 6 0.000336
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003515 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001546 2 0.000256
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002438 5 0.001319
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000096 1 0.000076
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000389 1 0.000105
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 DELETING pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050869 2 0.000180
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.052488 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056661 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 712704 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744746 data_alloc: 218103808 data_used: 12288
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.079876 2 0.000055
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.385285378s of 10.488866806s, submitted: 116
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.923657 1 0.000078
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006782 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010544 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.010572 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995595932s) [1] async=[1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 40'1059 active pruub 201.442153931s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] exit Reset 0.000101 1 0.000148
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] exit Start 0.000016 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 704512 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xc3e18/0x14e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 638976 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139639 6 0.000120
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000658 1 0.000064
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 DELETING pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050865 3 0.000165
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.051595 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.191287 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcace000/0x0/0x4ffc00000, data 0xc3e18/0x14e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 1662976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 1662976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xc5dba/0x151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 1777664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746862 data_alloc: 218103808 data_used: 16384
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 1777664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 1753088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 1744896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xc7ea6/0x154000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 1736704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1720320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759218 data_alloc: 218103808 data_used: 16384
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xcc07e/0x15a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 1671168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.010528564s of 10.088379860s, submitted: 33
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 1638400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xd0256/0x160000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 116, src has [1,118]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1589248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777465 data_alloc: 218103808 data_used: 24576
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1556480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789067 data_alloc: 218103808 data_used: 40960
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab0000/0x0/0x4ffc00000, data 0xd813e/0x16c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1531904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005482674s of 10.049218178s, submitted: 44
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1515520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xdc316/0x172000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1458176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811691 data_alloc: 218103808 data_used: 45056
Oct  9 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1449984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa0000/0x0/0x4ffc00000, data 0xe23ae/0x17b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1417216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e(unlocked)] enter Initial
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000048
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000161 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1392640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820599 data_alloc: 218103808 data_used: 53248
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.767282 2 0.000052
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.767477 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.767500 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000084 1 0.000131
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1376256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1343488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.973302841s of 11.008710861s, submitted: 32
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.920757 5 0.000039
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.663154 94 0.002040
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.664701 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 46.669357 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 46.669400 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Reset 0.000316 1 0.000677
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Start 0.000047 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002385 4 0.000096
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000045
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035652 1 0.000093
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069951 1 0.000072
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.108230 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029027 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000197 1 0.000281
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.107543 3 0.000118
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.107638 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000128 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000555
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000220
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Oct  9 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001188 3 0.000093
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0xe8562/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1318912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015862 4 0.000066
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015980 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014927 2 0.000066
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016367 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001208 3 0.000092
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.188305 5 0.000535
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000101 1 0.000097
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000646 1 0.000096
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.069841 2 0.000086
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xec658/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.466386 1 0.000109
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.725557 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.741565 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.741587 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Reset 0.000091 1 0.000158
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Start
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started/Stray
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1302528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7274f00
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007853 7 0.000095
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000067 1 0.000098
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 DELETING pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038244 2 0.000163
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038378 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046311 0 0.000000
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1212416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1171456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1155072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1146880 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1138688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1130496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1105920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6ded0e0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd6d6f2c0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1089536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1064960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.992691040s of 25.019613266s, submitted: 36
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835106 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1015808 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1007616 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835778 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.494864464s of 10.498138428s, submitted: 2
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 991232 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd5d7f0e0
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 909312 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 827392 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.944450378s of 85.945846558s, submitted: 1
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 573440 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd5d7ed20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.127048492s of 54.128883362s, submitted: 1
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 270336 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 147456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 32768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 24576 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1007616 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 991232 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 983040 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 966656 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 958464 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 917504 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 827392 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 761856 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 753664 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 704512 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 655360 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 630784 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 622592 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 606208 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 573440 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s
Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.829528809s of 215.830673218s, submitted: 1
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 90112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 933888 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 925696 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 843776 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 720896 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 581632 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 516096 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 466944 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 344064 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 245760 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 188416 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 81920 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 497.047088623s of 497.167572021s, submitted: 220
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 65536 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929854 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c6d20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe0e000/0x0/0x4ffc00000, data 0xd68845/0xe0c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c6f00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942313 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.789459229s of 10.829751968s, submitted: 37
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75c74a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd75c7860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd50c8c00 session 0x55bdd75c7a40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c7c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c7e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75dbe00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973607 data_alloc: 234881024 data_used: 11530240
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.399969101s of 31.402891159s, submitted: 12
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8ddc000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd50c8800 session 0x55bdd8ddc3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8ddd2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8ddda40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8ddcd20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8de45a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd8de4780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051961 data_alloc: 234881024 data_used: 11530240
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8de4960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8de4b40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.508440018s of 16.597480774s, submitted: 91
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97c9000/0x0/0x4ffc00000, data 0x21f8bd5/0x22a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 4489216 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207958 data_alloc: 234881024 data_used: 17833984
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a4000/0x0/0x4ffc00000, data 0x222bbd5/0x22d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207054 data_alloc: 234881024 data_used: 17838080
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a1000/0x0/0x4ffc00000, data 0x222ebd5/0x22db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207662 data_alloc: 234881024 data_used: 17899520
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.325393677s of 13.400735855s, submitted: 135
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207886 data_alloc: 234881024 data_used: 17899520
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd75db4a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 5431296 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75db860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd75c6d20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d63c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df4000/0x0/0x4ffc00000, data 0x2bdac37/0x2c88000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7cd20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14303232 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283605 data_alloc: 234881024 data_used: 17903616
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14295040 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7c3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6d683c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.297891617s of 11.330360413s, submitted: 36
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6d68960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 9428992 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df3000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350314 data_alloc: 251658240 data_used: 27828224
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350570 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df2000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.0 total, 600.0 interval
Cumulative writes: 6965 writes, 28K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 6965 writes, 1430 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 985 writes, 2603 keys, 985 commit groups, 1.0 writes per commit group, ingest: 2.82 MB, 0.00 MB/s
Interval WAL: 985 writes, 447 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memt
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 5423104 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.703389168s of 11.709489822s, submitted: 7
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 4284416 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1406758 data_alloc: 251658240 data_used: 27897856
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405238 data_alloc: 251658240 data_used: 27901952
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.993879318s of 10.153537750s, submitted: 296
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d69680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 11018240 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be3e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f873a000/0x0/0x4ffc00000, data 0x3293c5a/0x3342000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8de4f00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101974016 unmapped: 18743296 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd7b5cf00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6decd20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d6fa40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6a000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7d680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.631416321s of 24.683015823s, submitted: 93
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6d6f2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd4a30780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75cf0e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd4fc1c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd755fa40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095448 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd755e5a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 29204480 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 23945216 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 23912448 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.983060837s of 14.016182899s, submitted: 42
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 14106624 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265302 data_alloc: 234881024 data_used: 21835776
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 14516224 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.858042717s of 12.936762810s, submitted: 133
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260302 data_alloc: 234881024 data_used: 21839872
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e7000/0x0/0x4ffc00000, data 0x22d9bb2/0x2385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd755ef00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 14434304 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd7b5d680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.193244934s of 29.216753006s, submitted: 42
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be32c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc3e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6e230e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd6e62000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd4fc3c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 28540928 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd4a334a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd4fc34a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc2780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7b5c3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.317389488s of 10.355058670s, submitted: 47
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd74754a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101553 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.569676399s of 10.576947212s, submitted: 6
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 18022400 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x2123bd5/0x21d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250407 data_alloc: 234881024 data_used: 18644992
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f947a000/0x0/0x4ffc00000, data 0x2145bd5/0x21f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245743 data_alloc: 234881024 data_used: 18653184
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.912234306s of 12.987756729s, submitted: 133
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245823 data_alloc: 234881024 data_used: 18653184
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x214bbd5/0x21f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fa40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1a40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac7000/0x0/0x4ffc00000, data 0x2af7c37/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326222 data_alloc: 234881024 data_used: 18653184
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac4000/0x0/0x4ffc00000, data 0x2afac37/0x2ba8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079881668s of 10.119346619s, submitted: 44
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd7ae1c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332340 data_alloc: 234881024 data_used: 18657280
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1399068 data_alloc: 251658240 data_used: 28499968
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.117259026s of 11.127627373s, submitted: 16
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 4317184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484586 data_alloc: 251658240 data_used: 28954624
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f87000/0x0/0x4ffc00000, data 0x3636c5a/0x36e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f87000/0x0/0x4ffc00000, data 0x3636c5a/0x36e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490334 data_alloc: 251658240 data_used: 29265920
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f66000/0x0/0x4ffc00000, data 0x3657c5a/0x3706000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd75b50e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d800 session 0x55bdd6d71680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fc20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260722 data_alloc: 234881024 data_used: 18653184
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.852729797s of 10.956905365s, submitted: 182
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7ed20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd75b5c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd6d6ed20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd6d62000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6b883c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7c780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6e625a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 17555456 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd7ae0f00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.988206863s of 19.015766144s, submitted: 47
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6d6b2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6e225a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75ced20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd75cef00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6d6af00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0de000/0x0/0x4ffc00000, data 0x14e2b60/0x158e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd75cf680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 24510464 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 23560192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.775194168s of 17.811866760s, submitted: 39
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 22888448 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267353 data_alloc: 234881024 data_used: 19775488
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72741e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6af00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75c6960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75b41e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.414363861s of 20.486953735s, submitted: 115
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba7c00 session 0x55bdd4a30780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd75b5c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316636 data_alloc: 234881024 data_used: 19775488
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7220960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4fc21e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 18440192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd7b5cd20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd7b5d0e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 16072704 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1363761 data_alloc: 234881024 data_used: 26726400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.219516754s of 11.251511574s, submitted: 40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1364569 data_alloc: 234881024 data_used: 26726400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd0000/0x0/0x4ffc00000, data 0x27edbe5/0x289b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14245888 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 13918208 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134766592 unmapped: 8675328 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 7405568 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.565647125s of 11.631405830s, submitted: 106
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 7323648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 7282688 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.583915710s of 24.589715958s, submitted: 15
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 7012352 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 6979584 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.233993530s of 15.235481262s, submitted: 1
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1449663 data_alloc: 251658240 data_used: 27832320
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd5d7d2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd5d7cf00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7b5d2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 12156928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269843 data_alloc: 234881024 data_used: 19759104
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cfe00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd7275c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd7ae10e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72745a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7475e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6f7d2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75ce000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.875421524s of 23.928354263s, submitted: 95
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cf860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6dec000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125640704 unmapped: 28368896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd755e000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd755e5a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135665 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6f2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6d6fc20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6ef00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e3000/0x0/0x4ffc00000, data 0x14ddbb2/0x1589000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 28311552 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191655 data_alloc: 234881024 data_used: 19226624
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d6f0e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6f7c780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4e094a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd7221860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75da780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.361354828s of 19.447809219s, submitted: 108
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cf680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7ae1860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6be23c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be2b40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be3e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116223 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6dec000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6ded0e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec5a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd4a30780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6d6a3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7275e00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146352 data_alloc: 234881024 data_used: 15314944
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd4a33a40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.317052841s of 24.347005844s, submitted: 33
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6be2b40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6dec000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136711 data_alloc: 234881024 data_used: 11534336
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.554158211s of 14.583094597s, submitted: 35
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f90f1000/0x0/0x4ffc00000, data 0x20bfbb2/0x216b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 21413888 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279137 data_alloc: 234881024 data_used: 15622144
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9062000/0x0/0x4ffc00000, data 0x214ebb2/0x21fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278745 data_alloc: 234881024 data_used: 15638528
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9041000/0x0/0x4ffc00000, data 0x216fbb2/0x221b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.235909462s of 11.308976173s, submitted: 148
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75b4b40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d743c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6e221e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6e780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6d6eb40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.061567307s of 17.076330185s, submitted: 25
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4a301e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd74752c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0d400 session 0x55bdd4a305a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75b5c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cfc20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6decb40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7b5c780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2000 session 0x55bdd58a8f00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 33587200 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172702 data_alloc: 234881024 data_used: 9961472
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6e23a40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 33554432 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234717 data_alloc: 234881024 data_used: 18628608
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4a330e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd75ce3c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 30253056 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd5d7f860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2c00 session 0x55bdd72214a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.769443512s of 12.800458908s, submitted: 35
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7ae0f00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6d6b860
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd7474b40
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd74750e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3000 session 0x55bdd4fc23c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319022 data_alloc: 234881024 data_used: 18628608
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133038080 unmapped: 25174016 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 26550272 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e22d20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87c8000/0x0/0x4ffc00000, data 0x29e5b80/0x2a93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 26525696 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.509668350s of 14.591269493s, submitted: 121
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 11067392 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566109 data_alloc: 251658240 data_used: 29638656
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.815006256s of 15.883452415s, submitted: 119
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d75c20
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7274000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd4e09680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323675 data_alloc: 234881024 data_used: 18726912
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd57dfe00
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd57df680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd58a9680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8031000/0x0/0x4ffc00000, data 0x1fddb70/0x208a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.839834213s of 21.894390106s, submitted: 91
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d70960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4a30780
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd6f7c5a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd7ae12c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75b52c0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e230e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd7b5c960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7b5d680
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4e085a0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 26050560 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24846336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264737 data_alloc: 234881024 data_used: 18120704
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275377 data_alloc: 234881024 data_used: 19714048
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.853632927s of 12.886682510s, submitted: 37
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 16121856 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6fd9000/0x0/0x4ffc00000, data 0x1e97b73/0x1f43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346571 data_alloc: 234881024 data_used: 20361216
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345475 data_alloc: 234881024 data_used: 20361216
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.167773247s of 11.241725922s, submitted: 103
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4e08960
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd755e000
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd745cc00 session 0x55bdd5d7f0e0
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}'
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}'
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}'
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 23601152 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}'
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  9 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct  9 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 23617536 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct  9 10:05:06 compute-2 ceph-osd[11347]: do_command 'log dump' '{prefix=log dump}'
Oct  9 10:05:06 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:05:06 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2884084467' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:05:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:06 compute-2 nova_compute[163961]: 2025-10-09 10:05:06.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:05:07 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580093067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:05:07 compute-2 nova_compute[163961]: 2025-10-09 10:05:07.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct  9 10:05:07 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/68246502' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/933132204' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  9 10:05:08 compute-2 nova_compute[163961]: 2025-10-09 10:05:08.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911082628' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2179929761' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1308911326' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567861671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  9 10:05:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct  9 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2020451013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  9 10:05:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/702201220' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1411436454' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3524500067' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4143261129' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/884900180' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct  9 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1984048702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  9 10:05:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1673871356' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-2 podman[173974]: 2025-10-09 10:05:10.249155663 +0000 UTC m=+0.083749282 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct  9 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/705132865' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct  9 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct  9 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/310330597' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:10.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:11 compute-2 systemd[1]: Starting Hostname Service...
Oct  9 10:05:11 compute-2 systemd[1]: Started Hostname Service.
Oct  9 10:05:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct  9 10:05:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784783823' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct  9 10:05:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049596796' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:11 compute-2 nova_compute[163961]: 2025-10-09 10:05:11.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct  9 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/499448165' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct  9 10:05:12 compute-2 podman[174321]: 2025-10-09 10:05:12.272610951 +0000 UTC m=+0.104535964 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct  9 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3642453026' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-2 nova_compute[163961]: 2025-10-09 10:05:12.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1009466217' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct  9 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4150804299' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct  9 10:05:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:12.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct  9 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2364528424' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct  9 10:05:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct  9 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2788345293' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct  9 10:05:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:05:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5509 writes, 29K keys, 5509 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s#012Cumulative WAL: 5509 writes, 5509 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1539 writes, 7583 keys, 1539 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s#012Interval WAL: 1539 writes, 1539 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    374.2      0.12              0.08        15    0.008       0      0       0.0       0.0#012  L6      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    396.8    340.3      0.53              0.28        14    0.038     72K   7335       0.0       0.0#012 Sum      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    325.4    346.4      0.64              0.36        29    0.022     72K   7335       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    354.9    361.8      0.21              0.12        10    0.021     30K   2536       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    396.8    340.3      0.53              0.28        14    0.038     72K   7335       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    377.1      0.11              0.08        14    0.008       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.042, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 0.6 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 16.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(937,15.99 MB,5.25854%) FilterBlock(29,216.98 KB,0.0697036%) IndexBlock(29,378.44 KB,0.121568%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  9 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct  9 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3255535698' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct  9 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct  9 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2029825744' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct  9 10:05:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct  9 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct  9 10:05:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct  9 10:05:15 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3292770511' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct  9 10:05:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct  9 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1059604019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  9 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct  9 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855735972' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct  9 10:05:16 compute-2 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  9 10:05:16 compute-2 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  9 10:05:16 compute-2 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  9 10:05:16 compute-2 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  9 10:05:16 compute-2 kernel: cfg80211: failed to load regulatory.db
Oct  9 10:05:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 10:05:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 10:05:16 compute-2 nova_compute[163961]: 2025-10-09 10:05:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct  9 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3703601036' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct  9 10:05:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:17 compute-2 nova_compute[163961]: 2025-10-09 10:05:17.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct  9 10:05:17 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2006132426' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:18 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct  9 10:05:18 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/260578882' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct  9 10:05:18 compute-2 ovs-appctl[176173]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-2 ovs-appctl[176180]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-2 ovs-appctl[176186]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct  9 10:05:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct  9 10:05:19 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/133937591' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct  9 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct  9 10:05:19 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1514504317' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct  9 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:19 compute-2 podman[176500]: 2025-10-09 10:05:19.586782915 +0000 UTC m=+0.119336652 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  9 10:05:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:20.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:05:20 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3265058752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:05:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct  9 10:05:20 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1079050608' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct  9 10:05:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct  9 10:05:21 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1461598608' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:21 compute-2 nova_compute[163961]: 2025-10-09 10:05:21.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct  9 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/697504947' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct  9 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3640841864' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-2 nova_compute[163961]: 2025-10-09 10:05:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct  9 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/298067314' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3260254795' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct  9 10:05:23 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/207342957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:24.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2843286844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct  9 10:05:25 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/426215262' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2278152414' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:26.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct  9 10:05:26 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2375740516' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct  9 10:05:26 compute-2 nova_compute[163961]: 2025-10-09 10:05:26.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:27 compute-2 nova_compute[163961]: 2025-10-09 10:05:27.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:27 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:05:27 compute-2 systemd[1]: Starting Time & Date Service...
Oct  9 10:05:27 compute-2 systemd[1]: Started Time & Date Service.
Oct  9 10:05:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct  9 10:05:27 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2437616287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct  9 10:05:28 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3010218255' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:28.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:30 compute-2 podman[178579]: 2025-10-09 10:05:30.239704509 +0000 UTC m=+0.072514870 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:05:30 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct  9 10:05:30 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242952019' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct  9 10:05:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:30.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:31 compute-2 nova_compute[163961]: 2025-10-09 10:05:31.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:32 compute-2 nova_compute[163961]: 2025-10-09 10:05:32.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:32.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:36.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:36.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:36 compute-2 nova_compute[163961]: 2025-10-09 10:05:36.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:37 compute-2 nova_compute[163961]: 2025-10-09 10:05:37.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:38.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:38.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:40.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:40.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:41 compute-2 podman[178638]: 2025-10-09 10:05:41.209545148 +0000 UTC m=+0.040831679 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  9 10:05:41 compute-2 nova_compute[163961]: 2025-10-09 10:05:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:42 compute-2 nova_compute[163961]: 2025-10-09 10:05:42.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:43 compute-2 podman[178657]: 2025-10-09 10:05:43.210454547 +0000 UTC m=+0.044443199 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, config_id=multipathd)
Oct  9 10:05:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:44.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:44.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:46 compute-2 nova_compute[163961]: 2025-10-09 10:05:46.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:47 compute-2 nova_compute[163961]: 2025-10-09 10:05:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:48.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:48 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:48.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:05:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:50.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:50 compute-2 podman[178762]: 2025-10-09 10:05:50.234410404 +0000 UTC m=+0.067497368 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 10:05:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:51 compute-2 nova_compute[163961]: 2025-10-09 10:05:51.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:52 compute-2 nova_compute[163961]: 2025-10-09 10:05:52.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:05:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:05:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:05:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:54.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:55 compute-2 nova_compute[163961]: 2025-10-09 10:05:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:05:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:56.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:56 compute-2 nova_compute[163961]: 2025-10-09 10:05:56.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:57 compute-2 nova_compute[163961]: 2025-10-09 10:05:57.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:05:57 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  9 10:05:57 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  9 10:05:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:05:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:05:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:05:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:05:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:05:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:05:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:05:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2688699616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.552 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:05:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.750 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.751 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4808MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.751 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.752 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.877 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:05:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:05:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:00.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:06:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4016613573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.238 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.243 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.254 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.255 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.255 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.256 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.256 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  9 10:06:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:01 compute-2 podman[178895]: 2025-10-09 10:06:01.210426137 +0000 UTC m=+0.045969869 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.258 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.260 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.261 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.272 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:02.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:02 compute-2 nova_compute[163961]: 2025-10-09 10:06:02.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:02 compute-2 nova_compute[163961]: 2025-10-09 10:06:02.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:04.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:04 compute-2 nova_compute[163961]: 2025-10-09 10:06:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  9 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.187 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  9 10:06:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:06.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:06 compute-2 systemd[1]: session-40.scope: Deactivated successfully.
Oct  9 10:06:06 compute-2 systemd[1]: session-40.scope: Consumed 2min 4.734s CPU time, 646.9M memory peak, read 184.7M from disk, written 211.2M to disk.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Session 40 logged out. Waiting for processes to exit.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Removed session 40.
Oct  9 10:06:06 compute-2 systemd-logind[800]: New session 42 of user zuul.
Oct  9 10:06:06 compute-2 systemd[1]: Started Session 42 of User zuul.
Oct  9 10:06:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:06:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 11K writes, 3279 syncs, 3.42 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4248 writes, 15K keys, 4248 commit groups, 1.0 writes per commit group, ingest: 18.47 MB, 0.03 MB/s#012Interval WAL: 4248 writes, 1849 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  9 10:06:06 compute-2 systemd[1]: session-42.scope: Deactivated successfully.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Session 42 logged out. Waiting for processes to exit.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Removed session 42.
Oct  9 10:06:06 compute-2 systemd-logind[800]: New session 43 of user zuul.
Oct  9 10:06:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:06 compute-2 systemd[1]: Started Session 43 of User zuul.
Oct  9 10:06:06 compute-2 nova_compute[163961]: 2025-10-09 10:06:06.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:06 compute-2 systemd[1]: session-43.scope: Deactivated successfully.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Session 43 logged out. Waiting for processes to exit.
Oct  9 10:06:06 compute-2 systemd-logind[800]: Removed session 43.
Oct  9 10:06:07 compute-2 nova_compute[163961]: 2025-10-09 10:06:07.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:08.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:11 compute-2 nova_compute[163961]: 2025-10-09 10:06:11.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:12.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:12 compute-2 podman[178981]: 2025-10-09 10:06:12.227241098 +0000 UTC m=+0.057587263 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  9 10:06:12 compute-2 nova_compute[163961]: 2025-10-09 10:06:12.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:12.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:13 compute-2 podman[179023]: 2025-10-09 10:06:13.69198124 +0000 UTC m=+0.058999896 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct  9 10:06:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:14.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct  9 10:06:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:16.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:16 compute-2 nova_compute[163961]: 2025-10-09 10:06:16.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:17 compute-2 systemd[1]: Stopping User Manager for UID 1000...
Oct  9 10:06:17 compute-2 systemd[171508]: Activating special unit Exit the Session...
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped target Main User Target.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped target Basic System.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped target Paths.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped target Sockets.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped target Timers.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  9 10:06:17 compute-2 systemd[171508]: Closed D-Bus User Message Bus Socket.
Oct  9 10:06:17 compute-2 systemd[171508]: Stopped Create User's Volatile Files and Directories.
Oct  9 10:06:17 compute-2 systemd[171508]: Removed slice User Application Slice.
Oct  9 10:06:17 compute-2 systemd[171508]: Reached target Shutdown.
Oct  9 10:06:17 compute-2 systemd[171508]: Finished Exit the Session.
Oct  9 10:06:17 compute-2 systemd[171508]: Reached target Exit the Session.
Oct  9 10:06:17 compute-2 systemd[1]: user@1000.service: Deactivated successfully.
Oct  9 10:06:17 compute-2 systemd[1]: Stopped User Manager for UID 1000.
Oct  9 10:06:17 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct  9 10:06:17 compute-2 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct  9 10:06:17 compute-2 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct  9 10:06:17 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct  9 10:06:17 compute-2 systemd[1]: Removed slice User Slice of UID 1000.
Oct  9 10:06:17 compute-2 systemd[1]: user-1000.slice: Consumed 2min 5.130s CPU time, 653.1M memory peak, read 184.7M from disk, written 211.2M to disk.
Oct  9 10:06:17 compute-2 nova_compute[163961]: 2025-10-09 10:06:17.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:18.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:21 compute-2 podman[179049]: 2025-10-09 10:06:21.245398588 +0000 UTC m=+0.077178633 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  9 10:06:21 compute-2 nova_compute[163961]: 2025-10-09 10:06:21.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:22.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:22 compute-2 nova_compute[163961]: 2025-10-09 10:06:22.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:22.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:26 compute-2 nova_compute[163961]: 2025-10-09 10:06:26.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:27 compute-2 nova_compute[163961]: 2025-10-09 10:06:27.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:31 compute-2 nova_compute[163961]: 2025-10-09 10:06:31.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:32 compute-2 podman[179083]: 2025-10-09 10:06:32.209682712 +0000 UTC m=+0.045304793 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct  9 10:06:32 compute-2 nova_compute[163961]: 2025-10-09 10:06:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000021s ======
Oct  9 10:06:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct  9 10:06:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:36 compute-2 nova_compute[163961]: 2025-10-09 10:06:36.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:37 compute-2 nova_compute[163961]: 2025-10-09 10:06:37.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:41 compute-2 nova_compute[163961]: 2025-10-09 10:06:41.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:42 compute-2 nova_compute[163961]: 2025-10-09 10:06:42.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:43 compute-2 podman[179137]: 2025-10-09 10:06:43.222674486 +0000 UTC m=+0.048419491 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 10:06:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:44 compute-2 podman[179154]: 2025-10-09 10:06:44.208477694 +0000 UTC m=+0.044396436 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:06:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:06:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:06:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:46 compute-2 nova_compute[163961]: 2025-10-09 10:06:46.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:47 compute-2 nova_compute[163961]: 2025-10-09 10:06:47.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:48.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:51 compute-2 nova_compute[163961]: 2025-10-09 10:06:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:52 compute-2 podman[179179]: 2025-10-09 10:06:52.237508004 +0000 UTC m=+0.072257180 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  9 10:06:52 compute-2 nova_compute[163961]: 2025-10-09 10:06:52.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.700517483 +0000 UTC m=+0.028966739 container create ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:06:54 compute-2 systemd[1]: Started libpod-conmon-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope.
Oct  9 10:06:54 compute-2 systemd[1]: Started libcrun container.
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.746924269 +0000 UTC m=+0.075373545 container init ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.751484636 +0000 UTC m=+0.079933893 container start ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.752760732 +0000 UTC m=+0.081209988 container attach ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct  9 10:06:54 compute-2 magical_clarke[179538]: 167 167
Oct  9 10:06:54 compute-2 systemd[1]: libpod-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope: Deactivated successfully.
Oct  9 10:06:54 compute-2 conmon[179538]: conmon ba31dcb2b88856dbc775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope/container/memory.events
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.75941134 +0000 UTC m=+0.087860596 container died ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  9 10:06:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-982f31cdf356673884f8ebe54543c502073e113a5001422dd198eef4ad91db48-merged.mount: Deactivated successfully.
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.776660348 +0000 UTC m=+0.105109604 container remove ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.689162085 +0000 UTC m=+0.017611351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:54 compute-2 systemd[1]: libpod-conmon-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope: Deactivated successfully.
Oct  9 10:06:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.928982656 +0000 UTC m=+0.042325904 container create 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct  9 10:06:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:54 compute-2 systemd[1]: Started libpod-conmon-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope.
Oct  9 10:06:54 compute-2 systemd[1]: Started libcrun container.
Oct  9 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  9 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.991765644 +0000 UTC m=+0.105108892 container init 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  9 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.996735984 +0000 UTC m=+0.110079222 container start 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct  9 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.997990569 +0000 UTC m=+0.111333838 container attach 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  9 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:54.917561755 +0000 UTC m=+0.030905023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct  9 10:06:55 compute-2 suspicious_booth[179573]: [
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:    {
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "available": false,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "being_replaced": false,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "ceph_device_lvm": false,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "lsm_data": {},
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "lvs": [],
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "path": "/dev/sr0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "rejected_reasons": [
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "Insufficient space (<5GB)",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "Has a FileSystem"
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        ],
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        "sys_api": {
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "actuators": null,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "device_nodes": [
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:                "sr0"
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            ],
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "devname": "sr0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "human_readable_size": "474.00 KB",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "id_bus": "ata",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "model": "QEMU DVD-ROM",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "nr_requests": "64",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "parent": "/dev/sr0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "partitions": {},
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "path": "/dev/sr0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "removable": "1",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "rev": "2.5+",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "ro": "0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "rotational": "0",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "sas_address": "",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "sas_device_handle": "",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "scheduler_mode": "mq-deadline",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "sectors": 0,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "sectorsize": "2048",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "size": 485376.0,
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "support_discard": "2048",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "type": "disk",
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:            "vendor": "QEMU"
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:        }
Oct  9 10:06:55 compute-2 suspicious_booth[179573]:    }
Oct  9 10:06:55 compute-2 suspicious_booth[179573]: ]
Oct  9 10:06:55 compute-2 systemd[1]: libpod-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope: Deactivated successfully.
Oct  9 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:55.619427703 +0000 UTC m=+0.732770951 container died 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  9 10:06:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86-merged.mount: Deactivated successfully.
Oct  9 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:55.646712671 +0000 UTC m=+0.760055919 container remove 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  9 10:06:55 compute-2 systemd[1]: libpod-conmon-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope: Deactivated successfully.
Oct  9 10:06:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:06:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:06:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:56.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:56 compute-2 nova_compute[163961]: 2025-10-09 10:06:56.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:57 compute-2 nova_compute[163961]: 2025-10-09 10:06:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:06:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:06:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:06:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:06:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:06:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:58.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:06:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.189 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.216 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.217 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.217 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.218 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.218 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:06:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:06:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3815037012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:06:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.606 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.849 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4969MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.894 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.895 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.905 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:06:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:06:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:07:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2014211279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.291 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.295 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.309 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.310 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.311 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:07:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:00.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.294 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.295 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.295 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.348 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.507494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421507619, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2319, "num_deletes": 259, "total_data_size": 5771250, "memory_usage": 5869328, "flush_reason": "Manual Compaction"}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421521695, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3653044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28420, "largest_seqno": 30734, "table_properties": {"data_size": 3642593, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26278, "raw_average_key_size": 21, "raw_value_size": 3620093, "raw_average_value_size": 3006, "num_data_blocks": 273, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004276, "oldest_key_time": 1760004276, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14217 microseconds, and 11672 cpu microseconds.
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.521736) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3653044 bytes OK
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.521757) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522282) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522297) EVENT_LOG_v1 {"time_micros": 1760004421522292, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5759846, prev total WAL file size 5759846, number of live WAL files 2.
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.523227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3567KB)], [54(13MB)]
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421523281, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17779883, "oldest_snapshot_seqno": -1}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6468 keys, 17623706 bytes, temperature: kUnknown
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421565576, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17623706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17576986, "index_size": 29458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164677, "raw_average_key_size": 25, "raw_value_size": 17456901, "raw_average_value_size": 2698, "num_data_blocks": 1206, "num_entries": 6468, "num_filter_entries": 6468, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.565769) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17623706 bytes
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566177) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 419.8 rd, 416.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 13.5 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 7004, records dropped: 536 output_compression: NoCompression
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566194) EVENT_LOG_v1 {"time_micros": 1760004421566187, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42349, "compaction_time_cpu_micros": 27111, "output_level": 6, "num_output_files": 1, "total_output_size": 17623706, "num_input_records": 7004, "num_output_records": 6468, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421566697, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421568506, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.523165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:02 compute-2 nova_compute[163961]: 2025-10-09 10:07:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:02.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:03 compute-2 podman[180834]: 2025-10-09 10:07:03.217590763 +0000 UTC m=+0.044201430 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  9 10:07:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:04.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:04 compute-2 nova_compute[163961]: 2025-10-09 10:07:04.222 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:04.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:05 compute-2 nova_compute[163961]: 2025-10-09 10:07:05.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:05 compute-2 nova_compute[163961]: 2025-10-09 10:07:05.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:06 compute-2 nova_compute[163961]: 2025-10-09 10:07:06.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:07 compute-2 nova_compute[163961]: 2025-10-09 10:07:07.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:07:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:11 compute-2 nova_compute[163961]: 2025-10-09 10:07:11.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:12 compute-2 nova_compute[163961]: 2025-10-09 10:07:12.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:12 compute-2 nova_compute[163961]: 2025-10-09 10:07:12.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:13 compute-2 podman[180886]: 2025-10-09 10:07:13.877729703 +0000 UTC m=+0.042056616 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  9 10:07:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:15 compute-2 podman[180904]: 2025-10-09 10:07:15.212729055 +0000 UTC m=+0.041747414 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  9 10:07:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:16 compute-2 nova_compute[163961]: 2025-10-09 10:07:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:17 compute-2 nova_compute[163961]: 2025-10-09 10:07:17.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:18.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:20.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:21 compute-2 nova_compute[163961]: 2025-10-09 10:07:21.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:22 compute-2 nova_compute[163961]: 2025-10-09 10:07:22.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:23 compute-2 podman[180929]: 2025-10-09 10:07:23.248666086 +0000 UTC m=+0.077871786 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 10:07:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:24.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:26.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:26.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:26 compute-2 nova_compute[163961]: 2025-10-09 10:07:26.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:27 compute-2 nova_compute[163961]: 2025-10-09 10:07:27.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:28.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:30.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:30.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:31 compute-2 nova_compute[163961]: 2025-10-09 10:07:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:32.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:32 compute-2 nova_compute[163961]: 2025-10-09 10:07:32.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:33 compute-2 podman[180987]: 2025-10-09 10:07:33.940295152 +0000 UTC m=+0.038427165 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  9 10:07:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000020s ======
Oct  9 10:07:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct  9 10:07:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:36.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:36 compute-2 nova_compute[163961]: 2025-10-09 10:07:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:37 compute-2 nova_compute[163961]: 2025-10-09 10:07:37.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:38.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:38.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:40.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:40.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:41 compute-2 nova_compute[163961]: 2025-10-09 10:07:41.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:42.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:42 compute-2 nova_compute[163961]: 2025-10-09 10:07:42.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:44.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:44 compute-2 podman[181016]: 2025-10-09 10:07:44.228534415 +0000 UTC m=+0.057358673 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  9 10:07:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:46.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:46 compute-2 podman[181035]: 2025-10-09 10:07:46.224446211 +0000 UTC m=+0.059054341 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct  9 10:07:46 compute-2 nova_compute[163961]: 2025-10-09 10:07:46.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:47 compute-2 nova_compute[163961]: 2025-10-09 10:07:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:48.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.952588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468952632, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 662, "num_deletes": 251, "total_data_size": 1271281, "memory_usage": 1284576, "flush_reason": "Manual Compaction"}
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468957291, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 836219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30739, "largest_seqno": 31396, "table_properties": {"data_size": 832917, "index_size": 1210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7395, "raw_average_key_size": 19, "raw_value_size": 826465, "raw_average_value_size": 2124, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004422, "oldest_key_time": 1760004422, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 4750 microseconds, and 3738 cpu microseconds.
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957337) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 836219 bytes OK
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957360) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957724) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957736) EVENT_LOG_v1 {"time_micros": 1760004468957733, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957753) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1267680, prev total WAL file size 1267680, number of live WAL files 2.
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.958234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(816KB)], [57(16MB)]
Oct  9 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468958272, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18459925, "oldest_snapshot_seqno": -1}
Oct  9 10:07:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:48.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6346 keys, 16354784 bytes, temperature: kUnknown
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469006046, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16354784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16309863, "index_size": 27979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162794, "raw_average_key_size": 25, "raw_value_size": 16192899, "raw_average_value_size": 2551, "num_data_blocks": 1141, "num_entries": 6346, "num_filter_entries": 6346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006339) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16354784 bytes
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006919) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 385.6 rd, 341.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(41.6) write-amplify(19.6) OK, records in: 6857, records dropped: 511 output_compression: NoCompression
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006936) EVENT_LOG_v1 {"time_micros": 1760004469006927, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47878, "compaction_time_cpu_micros": 35803, "output_level": 6, "num_output_files": 1, "total_output_size": 16354784, "num_input_records": 6857, "num_output_records": 6346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469007317, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469010292, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.958186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:07:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:50.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:51 compute-2 nova_compute[163961]: 2025-10-09 10:07:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:52.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:52 compute-2 nova_compute[163961]: 2025-10-09 10:07:52.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:54 compute-2 podman[181084]: 2025-10-09 10:07:54.045378033 +0000 UTC m=+0.072465963 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct  9 10:07:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:07:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:07:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:56 compute-2 nova_compute[163961]: 2025-10-09 10:07:56.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:07:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:07:57 compute-2 nova_compute[163961]: 2025-10-09 10:07:57.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:07:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:07:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:07:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.174 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.195 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.197 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.197 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:07:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:07:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/700554729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.568 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:07:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.837 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4943MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.839 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.839 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.890 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.890 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.962 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:08:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:08:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1534538487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.324 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.329 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.339 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.341 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.341 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.339 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:08:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.184 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:08:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:03 compute-2 nova_compute[163961]: 2025-10-09 10:08:03.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:04 compute-2 podman[181266]: 2025-10-09 10:08:04.244170273 +0000 UTC m=+0.070929899 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  9 10:08:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:08:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:04.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:05 compute-2 nova_compute[163961]: 2025-10-09 10:08:05.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:05 compute-2 nova_compute[163961]: 2025-10-09 10:08:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:06 compute-2 nova_compute[163961]: 2025-10-09 10:08:06.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:07 compute-2 nova_compute[163961]: 2025-10-09 10:08:07.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:08:07 compute-2 nova_compute[163961]: 2025-10-09 10:08:07.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:08.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:08:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:10.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:11 compute-2 nova_compute[163961]: 2025-10-09 10:08:11.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:12 compute-2 nova_compute[163961]: 2025-10-09 10:08:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:14.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:15 compute-2 podman[181323]: 2025-10-09 10:08:15.215742648 +0000 UTC m=+0.044857175 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:08:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:16.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:16 compute-2 nova_compute[163961]: 2025-10-09 10:08:16.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:17 compute-2 podman[181341]: 2025-10-09 10:08:17.218460746 +0000 UTC m=+0.048203735 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  9 10:08:17 compute-2 nova_compute[163961]: 2025-10-09 10:08:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:18.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:18.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.736 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.770 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:08:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:22 compute-2 nova_compute[163961]: 2025-10-09 10:08:22.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:22.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:24 compute-2 podman[181366]: 2025-10-09 10:08:24.231612449 +0000 UTC m=+0.062669927 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:08:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:25 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:25.714 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  9 10:08:25 compute-2 nova_compute[163961]: 2025-10-09 10:08:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:25 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:25.715 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  9 10:08:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:26.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:26 compute-2 nova_compute[163961]: 2025-10-09 10:08:26.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:27 compute-2 nova_compute[163961]: 2025-10-09 10:08:27.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:28.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:29.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:31 compute-2 nova_compute[163961]: 2025-10-09 10:08:31.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:32 compute-2 nova_compute[163961]: 2025-10-09 10:08:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:33 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:33.718 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  9 10:08:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:35 compute-2 podman[181426]: 2025-10-09 10:08:35.250645769 +0000 UTC m=+0.077976823 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:08:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:36 compute-2 nova_compute[163961]: 2025-10-09 10:08:36.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:37 compute-2 nova_compute[163961]: 2025-10-09 10:08:37.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:39.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:41 compute-2 nova_compute[163961]: 2025-10-09 10:08:41.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:42 compute-2 nova_compute[163961]: 2025-10-09 10:08:42.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:45.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:46 compute-2 podman[181454]: 2025-10-09 10:08:46.222918295 +0000 UTC m=+0.050152739 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  9 10:08:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:46.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:46 compute-2 nova_compute[163961]: 2025-10-09 10:08:46.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:47 compute-2 nova_compute[163961]: 2025-10-09 10:08:47.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:48 compute-2 podman[181472]: 2025-10-09 10:08:48.22134982 +0000 UTC m=+0.051577365 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct  9 10:08:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:49.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:51 compute-2 nova_compute[163961]: 2025-10-09 10:08:51.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:52.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:52 compute-2 nova_compute[163961]: 2025-10-09 10:08:52.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:08:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:08:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:55 compute-2 podman[181521]: 2025-10-09 10:08:55.228102504 +0000 UTC m=+0.059947635 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  9 10:08:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:56 compute-2 nova_compute[163961]: 2025-10-09 10:08:56.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  9 10:08:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:57 compute-2 nova_compute[163961]: 2025-10-09 10:08:57.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:08:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:08:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:08:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:08:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:08:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:08:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:08:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.174 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.195 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1526071422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.570 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.803 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.805 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4959MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.805 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.806 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.858 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.858 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.870 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:09:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:01.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:09:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1405609489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.245 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.251 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.261 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.262 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.262 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:09:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:03.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:04.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:05.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:09:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:06 compute-2 nova_compute[163961]: 2025-10-09 10:09:06.176 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:06 compute-2 podman[181678]: 2025-10-09 10:09:06.219591465 +0000 UTC m=+0.047638360 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:09:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:06 compute-2 nova_compute[163961]: 2025-10-09 10:09:06.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:07 compute-2 nova_compute[163961]: 2025-10-09 10:09:07.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:07 compute-2 nova_compute[163961]: 2025-10-09 10:09:07.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:09 compute-2 nova_compute[163961]: 2025-10-09 10:09:09.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:09:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:09:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:11 compute-2 nova_compute[163961]: 2025-10-09 10:09:11.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:12 compute-2 nova_compute[163961]: 2025-10-09 10:09:12.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:09:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:12 compute-2 nova_compute[163961]: 2025-10-09 10:09:12.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:13.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:15.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:16 compute-2 nova_compute[163961]: 2025-10-09 10:09:16.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:17 compute-2 podman[181756]: 2025-10-09 10:09:17.210880482 +0000 UTC m=+0.042837128 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  9 10:09:17 compute-2 nova_compute[163961]: 2025-10-09 10:09:17.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:19 compute-2 podman[181776]: 2025-10-09 10:09:19.217661504 +0000 UTC m=+0.049354336 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  9 10:09:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:22 compute-2 nova_compute[163961]: 2025-10-09 10:09:22.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:22 compute-2 nova_compute[163961]: 2025-10-09 10:09:22.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:23.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:26 compute-2 podman[181800]: 2025-10-09 10:09:26.23137145 +0000 UTC m=+0.062357709 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  9 10:09:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:26.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:27 compute-2 nova_compute[163961]: 2025-10-09 10:09:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:27.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:27 compute-2 nova_compute[163961]: 2025-10-09 10:09:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:28.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:30.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:31.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:32 compute-2 nova_compute[163961]: 2025-10-09 10:09:32.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:32.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:32 compute-2 nova_compute[163961]: 2025-10-09 10:09:32.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:36.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:37 compute-2 nova_compute[163961]: 2025-10-09 10:09:37.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:37 compute-2 podman[181859]: 2025-10-09 10:09:37.218447914 +0000 UTC m=+0.046788489 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  9 10:09:37 compute-2 nova_compute[163961]: 2025-10-09 10:09:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:38.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:41.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:42 compute-2 nova_compute[163961]: 2025-10-09 10:09:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:42 compute-2 nova_compute[163961]: 2025-10-09 10:09:42.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:47 compute-2 nova_compute[163961]: 2025-10-09 10:09:47.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:47 compute-2 nova_compute[163961]: 2025-10-09 10:09:47.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:48 compute-2 podman[181888]: 2025-10-09 10:09:48.216995723 +0000 UTC m=+0.045402645 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct  9 10:09:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:50 compute-2 podman[181906]: 2025-10-09 10:09:50.216588283 +0000 UTC m=+0.049730594 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  9 10:09:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:09:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:09:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:51 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:52 compute-2 nova_compute[163961]: 2025-10-09 10:09:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:52 compute-2 nova_compute[163961]: 2025-10-09 10:09:52.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:52 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:53 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:09:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:09:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:54 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:55 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:56.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:56 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:57 compute-2 nova_compute[163961]: 2025-10-09 10:09:57.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:57 compute-2 podman[181955]: 2025-10-09 10:09:57.23203865 +0000 UTC m=+0.065297371 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:09:57 compute-2 nova_compute[163961]: 2025-10-09 10:09:57.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:09:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:57 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:58.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:58 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:09:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:09:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:09:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:09:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:59 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:00 compute-2 ceph-mon[5983]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct  9 10:10:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:00 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1012934223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.553 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.792 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.794 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4971MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.795 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.795 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.845 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.845 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.858 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.871 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.871 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.884 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.898 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  9 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.911 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  9 10:10:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:01 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct  9 10:10:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1023247151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.274 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.279 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.291 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.292 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.292 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:02 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.293 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.293 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.294 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.304 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.304 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  9 10:10:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:03 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:04 compute-2 nova_compute[163961]: 2025-10-09 10:10:04.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:04 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:05 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:06 compute-2 nova_compute[163961]: 2025-10-09 10:10:06.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:06.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:06 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:07 compute-2 nova_compute[163961]: 2025-10-09 10:10:07.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct  9 10:10:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct  9 10:10:07 compute-2 nova_compute[163961]: 2025-10-09 10:10:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:07 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:08 compute-2 systemd[1]: Starting system activity accounting tool...
Oct  9 10:10:08 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct  9 10:10:08 compute-2 systemd[1]: Finished system activity accounting tool.
Oct  9 10:10:08 compute-2 podman[182033]: 2025-10-09 10:10:08.221188073 +0000 UTC m=+0.049933146 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct  9 10:10:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:08 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000020s ======
Oct  9 10:10:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct  9 10:10:09 compute-2 podman[182158]: 2025-10-09 10:10:09.139579354 +0000 UTC m=+0.047025715 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:10:09 compute-2 nova_compute[163961]: 2025-10-09 10:10:09.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:09 compute-2 nova_compute[163961]: 2025-10-09 10:10:09.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  9 10:10:09 compute-2 podman[182175]: 2025-10-09 10:10:09.285961257 +0000 UTC m=+0.049960968 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  9 10:10:09 compute-2 podman[182158]: 2025-10-09 10:10:09.290347927 +0000 UTC m=+0.197794288 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  9 10:10:09 compute-2 podman[182237]: 2025-10-09 10:10:09.587861076 +0000 UTC m=+0.047066803 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:10:09 compute-2 podman[182237]: 2025-10-09 10:10:09.597060719 +0000 UTC m=+0.056266427 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct  9 10:10:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:09 compute-2 podman[182337]: 2025-10-09 10:10:09.944570163 +0000 UTC m=+0.039336229 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 10:10:09 compute-2 podman[182337]: 2025-10-09 10:10:09.955030412 +0000 UTC m=+0.049796479 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:09 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:10 compute-2 podman[182390]: 2025-10-09 10:10:10.122957903 +0000 UTC m=+0.040094930 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.28.2, release=1793)
Oct  9 10:10:10 compute-2 podman[182390]: 2025-10-09 10:10:10.135249356 +0000 UTC m=+0.052386392 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20)
Oct  9 10:10:10 compute-2 podman[182433]: 2025-10-09 10:10:10.268337536 +0000 UTC m=+0.043357840 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  9 10:10:10 compute-2 podman[182433]: 2025-10-09 10:10:10.285113412 +0000 UTC m=+0.060133707 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct  9 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.288 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  9 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.289 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  9 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.289 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  9 10:10:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:10 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.590374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611590438, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1953, "num_deletes": 504, "total_data_size": 4267432, "memory_usage": 4338096, "flush_reason": "Manual Compaction"}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611599765, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2787071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31401, "largest_seqno": 33349, "table_properties": {"data_size": 2779283, "index_size": 4154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18929, "raw_average_key_size": 18, "raw_value_size": 2761688, "raw_average_value_size": 2764, "num_data_blocks": 179, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004469, "oldest_key_time": 1760004469, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 9512 microseconds, and 7920 cpu microseconds.
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.599874) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2787071 bytes OK
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.599920) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600382) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600406) EVENT_LOG_v1 {"time_micros": 1760004611600396, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4257661, prev total WAL file size 4257661, number of live WAL files 2.
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.601753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2721KB)], [60(15MB)]
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611601820, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19141855, "oldest_snapshot_seqno": -1}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6318 keys, 13636272 bytes, temperature: kUnknown
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611644324, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 13636272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13594858, "index_size": 24536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 164389, "raw_average_key_size": 26, "raw_value_size": 13481425, "raw_average_value_size": 2133, "num_data_blocks": 975, "num_entries": 6318, "num_filter_entries": 6318, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644581) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13636272 bytes
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 449.4 rd, 320.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 15.6 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.9) OK, records in: 7345, records dropped: 1027 output_compression: NoCompression
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644910) EVENT_LOG_v1 {"time_micros": 1760004611644902, "job": 36, "event": "compaction_finished", "compaction_time_micros": 42593, "compaction_time_cpu_micros": 30293, "output_level": 6, "num_output_files": 1, "total_output_size": 13636272, "num_input_records": 7345, "num_output_records": 6318, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611645452, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611647933, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.601690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  9 10:10:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:11 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  9 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  9 10:10:12 compute-2 nova_compute[163961]: 2025-10-09 10:10:12.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:12 compute-2 nova_compute[163961]: 2025-10-09 10:10:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:12 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:13 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:14 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:15 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:16 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct  9 10:10:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:16 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:17 compute-2 nova_compute[163961]: 2025-10-09 10:10:17.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:17 compute-2 nova_compute[163961]: 2025-10-09 10:10:17.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:17 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:18 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:19 compute-2 podman[182624]: 2025-10-09 10:10:19.220965888 +0000 UTC m=+0.051826114 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct  9 10:10:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:19 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:20 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:21 compute-2 podman[182642]: 2025-10-09 10:10:21.227565518 +0000 UTC m=+0.053750743 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  9 10:10:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:21 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:22 compute-2 nova_compute[163961]: 2025-10-09 10:10:22.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:22 compute-2 nova_compute[163961]: 2025-10-09 10:10:22.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:22 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:23 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:24 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:25 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:26 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:27 compute-2 nova_compute[163961]: 2025-10-09 10:10:27.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:27 compute-2 nova_compute[163961]: 2025-10-09 10:10:27.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:27 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:28 compute-2 podman[182666]: 2025-10-09 10:10:28.238765442 +0000 UTC m=+0.070060629 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  9 10:10:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:28 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:29 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:30 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:31 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:32 compute-2 nova_compute[163961]: 2025-10-09 10:10:32.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:32 compute-2 nova_compute[163961]: 2025-10-09 10:10:32.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:32 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:33 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:34 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:35 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:36 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:37 compute-2 nova_compute[163961]: 2025-10-09 10:10:37.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:37 compute-2 nova_compute[163961]: 2025-10-09 10:10:37.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:37 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:38 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct  9 10:10:38 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  9 10:10:38 compute-2 systemd-logind[800]: New session 44 of user zuul.
Oct  9 10:10:38 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  9 10:10:38 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct  9 10:10:38 compute-2 podman[182730]: 2025-10-09 10:10:38.312645286 +0000 UTC m=+0.065372083 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  9 10:10:38 compute-2 systemd[182729]: Queued start job for default target Main User Target.
Oct  9 10:10:38 compute-2 systemd[182729]: Created slice User Application Slice.
Oct  9 10:10:38 compute-2 systemd[182729]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  9 10:10:38 compute-2 systemd[182729]: Started Daily Cleanup of User's Temporary Directories.
Oct  9 10:10:38 compute-2 systemd[182729]: Reached target Paths.
Oct  9 10:10:38 compute-2 systemd[182729]: Reached target Timers.
Oct  9 10:10:38 compute-2 systemd[182729]: Starting D-Bus User Message Bus Socket...
Oct  9 10:10:38 compute-2 systemd[182729]: Starting Create User's Volatile Files and Directories...
Oct  9 10:10:38 compute-2 systemd[182729]: Listening on D-Bus User Message Bus Socket.
Oct  9 10:10:38 compute-2 systemd[182729]: Reached target Sockets.
Oct  9 10:10:38 compute-2 systemd[182729]: Finished Create User's Volatile Files and Directories.
Oct  9 10:10:38 compute-2 systemd[182729]: Reached target Basic System.
Oct  9 10:10:38 compute-2 systemd[182729]: Reached target Main User Target.
Oct  9 10:10:38 compute-2 systemd[182729]: Startup finished in 122ms.
Oct  9 10:10:38 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct  9 10:10:38 compute-2 systemd[1]: Started Session 44 of User zuul.
Oct  9 10:10:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:38 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:39 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:40.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:40 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct  9 10:10:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct  9 10:10:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct  9 10:10:41 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3566203389' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  9 10:10:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:41 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:42 compute-2 nova_compute[163961]: 2025-10-09 10:10:42.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:42.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:42 compute-2 nova_compute[163961]: 2025-10-09 10:10:42.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:42 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:43 compute-2 ovs-vsctl[183043]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  9 10:10:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:43 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  9 10:10:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:44.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  9 10:10:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:44 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: cache status {prefix=cache status} (starting...)
Oct  9 10:10:44 compute-2 lvm[183322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  9 10:10:44 compute-2 lvm[183322]: VG ceph_vg0 finished
Oct  9 10:10:44 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: client ls {prefix=client ls} (starting...)
Oct  9 10:10:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:44 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: damage ls {prefix=damage ls} (starting...)
Oct  9 10:10:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct  9 10:10:45 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1073987092' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  9 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump loads {prefix=dump loads} (starting...)
Oct  9 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  9 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  9 10:10:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:45 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  9 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4029739969' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462442156' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:10:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1160818251' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  9 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  9 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct  9 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/895555260' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  9 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: ops {prefix=ops} (starting...)
Oct  9 10:10:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:46 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct  9 10:10:47 compute-2 nova_compute[163961]: 2025-10-09 10:10:47.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2256829641' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  9 10:10:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091719402' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3843149761' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:47 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: session ls {prefix=session ls} (starting...)
Oct  9 10:10:47 compute-2 nova_compute[163961]: 2025-10-09 10:10:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  9 10:10:47 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: status {prefix=status} (starting...)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435942827' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2847655966' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  9 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3743974822' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:47 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/382053438' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:48.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3590603568' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct  9 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2856552148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  9 10:10:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:48 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct  9 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3505807830' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  9 10:10:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct  9 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/428440663' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  9 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct  9 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2743014829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  9 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  9 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct  9 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636638593' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  9 10:10:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:49 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3428916114' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct  9 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758321859' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  9 10:10:50 compute-2 podman[184192]: 2025-10-09 10:10:50.256128681 +0000 UTC m=+0.085319597 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  9 10:10:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct  9 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2786971522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  9 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct  9 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1824377817' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  9 10:10:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:50 2025: (VI_0) received an invalid passwd!
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 1638400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xd0256/0x160000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 116, src has [1,118]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1589248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777465 data_alloc: 218103808 data_used: 24576
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1556480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789067 data_alloc: 218103808 data_used: 40960
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab0000/0x0/0x4ffc00000, data 0xd813e/0x16c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1531904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005482674s of 10.049218178s, submitted: 44
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1515520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xdc316/0x172000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1458176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811691 data_alloc: 218103808 data_used: 45056
Oct  9 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1449984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa0000/0x0/0x4ffc00000, data 0xe23ae/0x17b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1417216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e(unlocked)] enter Initial
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000048
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000161 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1392640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820599 data_alloc: 218103808 data_used: 53248
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.767282 2 0.000052
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.767477 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.767500 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000084 1 0.000131
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1376256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1343488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.973302841s of 11.008710861s, submitted: 32
Oct  9 10:10:51 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:51 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.920757 5 0.000039
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.663154 94 0.002040
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.664701 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 46.669357 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 46.669400 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Reset 0.000316 1 0.000677
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Start 0.000047 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started/Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002385 4 0.000096
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000045
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035652 1 0.000093
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069951 1 0.000072
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.108230 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029027 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000197 1 0.000281
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.107543 3 0.000118
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.107638 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000128 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000555
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000220
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct  9 10:10:51 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Oct  9 10:10:51 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001188 3 0.000093
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0xe8562/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1318912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015862 4 0.000066
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015980 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014927 2 0.000066
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016367 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001208 3 0.000092
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.188305 5 0.000535
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000101 1 0.000097
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000646 1 0.000096
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.069841 2 0.000086
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xec658/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.466386 1 0.000109
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.725557 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.741565 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.741587 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Reset 0.000091 1 0.000158
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Start
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Start 0.000006 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started/Stray
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1302528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7274f00
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007853 7 0.000095
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000067 1 0.000098
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 DELETING pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038244 2 0.000163
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038378 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046311 0 0.000000
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1212416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1171456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1155072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1146880 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1138688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1130496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1105920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6ded0e0
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd6d6f2c0
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1089536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1064960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.992691040s of 25.019613266s, submitted: 36
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835106 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1015808 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1007616 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835778 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.494864464s of 10.498138428s, submitted: 2
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 991232 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd5d7f0e0
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 909312 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 827392 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.944450378s of 85.945846558s, submitted: 1
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 573440 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd5d7ed20
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.127048492s of 54.128883362s, submitted: 1
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 270336 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 147456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 32768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 24576 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1007616 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 991232 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 983040 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 966656 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 958464 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 917504 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 827392 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 761856 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 753664 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 704512 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 655360 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 630784 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 622592 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 606208 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 573440 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.0 total, 600.0 interval
Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s
Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.0 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.829528809s of 215.830673218s, submitted: 1
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 90112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 933888 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 925696 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 843776 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct  9 10:10:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct  9 10:10:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 720896 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 581632 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 516096 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 466944 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 344064 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 245760 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 188416 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  9 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct  9 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct  9 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
